exponential change: what drives it, what does it tell us about the future?



DESCRIPTION

This book, which is available on Amazon (http://www.amazon.com/Exponential-Change-drives-about-future-ebook/dp/B00HPSAYEM), describes the drivers of exponential change and what these drivers tell us about the future. Based on an analysis of more than 50 technologies, it shows that exponential change is driven by: 1) the creation of new materials that better exploit physical phenomena and 2) changes in scale. The creation of new materials has enabled improvements in the strength-to-weight ratio of materials, the luminosity per watt of LEDs, and other dimensions for many other technologies. Changes in scale include both increases and reductions in scale. Production, energy, and transportation-related equipment typically benefit from increases in scale while integrated circuits (ICs), magnetic storage, MEMS (micro-electronic mechanical systems), and bio-electronic ICs for DNA sequencing benefit from reductions in scale.

TRANSCRIPT


Exponential Change:

What drives it?

What does it tell us about the future?

Jeffrey Funk

Associate Professor

National University of Singapore

Christopher Magee

Professor

MIT

Part I is available in this document. The entire book, including Part II

(about the future) is available from Amazon.com for $2.99.

http://www.amazon.com/Exponential-Change-drives-about-future-ebook/dp/B00HPSAYEM


Table of Contents

Chapter 1. Introduction

Part I: What drives exponential change?

Chapter 2. Creating materials to better exploit physical phenomena

Chapter 3. Reductions in scale

Chapter 4. Increases in scale

Part II: What does this tell us about the future?

Chapter 5. Integrated Circuits and Electronic Systems

Chapter 6. Micro-electronic mechanical systems

Chapter 7. Nanotechnology & Nano-materials

Chapter 8. Electronic Lighting

Chapter 9. Displays

Chapter 10. Health Care

Chapter 11. Telecommunications

Chapter 12. Human-Computer Interface

Chapter 13. Superconductivity

Chapter 14. Conclusions


Chapter 1. Introduction

“There is nothing permanent except change.” This observation, made by Heraclitus more than 2000 years ago,

is the essence of our daily business news. New products and services are released, new firms

are formed, existing firms are acquired or go bankrupt, and new governments including new

political systems continuously emerge. One key driver of this change is the market-based

economy and all its supporting institutions. Smoother functioning financial, insurance, and

regulatory systems facilitate the emergence of new products and services and the formation of

new firms. A second key factor is new ways of organizing work. Work can be divided in

new and different ways, and obtaining the benefits of new technologies often requires

new forms of organization. A third key factor is better methods of communication.

From postal mail services to the printing press, telegraph, telephone, and now the Internet,

better communication has facilitated change. New communication technologies speed up the

flow of information and thus promote new ideas, technologies, strategies, policies, and even

political change. For example, the recent political upheavals in the Middle East are partly due

to new communication media such as Facebook and Twitter.

However, a smoother functioning market economy, new ways of organizing, and better

communication technologies are not the whole story. Market-based economies only indirectly

lead to better products and services and thus better standards of living, and they do this only

when better techniques or technologies are available. Without these better techniques and

technologies, an improved ability to commercialize them would be meaningless. Similar

arguments can be made for new forms of organizing work or better communication

technologies. Without the better technologies, there is no need for new methods of organizing

work and there is no information to spread.


Furthermore, better communication technologies are themselves based on other new

technologies. The low cost of uploading and downloading vast amounts of data with

computers, mobile phones, and other electronic devices makes the Internet so popular and

powerful because this high “bandwidth” enables the inexpensive transmission of books,

reports, music, movies, and other video. Without its extremely high (and rapidly increasing)

bandwidth, the Internet wouldn't be much different from the telephone, the facsimile, and the

television.

But why have these communication (and other) technologies experienced such rapid

improvements, while others have not? The doubling in the performance of communication

and other electronic-based technologies every one to two years is often termed “exponential”

to reflect their rapid rate of improvement. In contrast to the “linear” improvements[i] that are

experienced by many technologies each year, the doubling in the performance of

communication technologies every one to two years has over decades led to many orders of

magnitude improvements in the cost and speed of wireline and wireless transmission.
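
To see how steady doubling compounds into orders of magnitude, a back-of-the-envelope calculation helps. The following is a minimal sketch; the doubling times are illustrative values in the range just described, not data from the book.

```python
import math

def orders_of_magnitude(doubling_time_years: float, horizon_years: float) -> float:
    """Return log10 of the total improvement after horizon_years."""
    doublings = horizon_years / doubling_time_years
    return doublings * math.log10(2)

for dt in (1.0, 1.5, 2.0):
    print(f"doubling every {dt} years -> "
          f"{orders_of_magnitude(dt, 50):.1f} orders of magnitude in 50 years")
# doubling every 1.0 years -> 15.1 orders of magnitude in 50 years
# doubling every 1.5 years -> 10.0 orders of magnitude in 50 years
# doubling every 2.0 years -> 7.5 orders of magnitude in 50 years
```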

Understanding why these and other technologies experience such exponential improvements

while others do not is essential to understanding when new technologies might become

economically feasible, their probable impact on our world, and the degree to which specific

technologies can help us solve global problems. More specifically, understanding these

technologies helps us understand the different ways in which we can design systems, the

alternative ways in which individuals, organizations, and societies can invest their financial

and human resources, and the types of policies that are needed to implement better systems.

1.1 What technologies are experiencing exponential change?

When we actually look at the things around us, the rates of change and improvement vary

considerably. The productivity of farms has been dramatically improved as better seeds have


been developed and larger and more sophisticated machines have been implemented. The

cost of electricity, processed metals, chemicals, and transportation also fell by many orders of

magnitude particularly in the first half of the 20th century largely as new processes were

developed and the scale of electrical generating plants, chemical and metal processing

factories, and transportation equipment (e.g., oil tankers, freighters, and trucks) was

increased. The cost of transportation continued to fall in the second half of the 20th

century as computers and electronics enabled improvements in the utilization and

coordination of railroads, trucks, aircraft and the containers transported by this equipment.

The fall in the cost of computers and electronics has been particularly large. Nine orders of magnitude

improvements over 50 years in the processing speed of computers (and routers and servers)

largely come from similar levels of improvements in the cost and speed of integrated circuits

(ICs). Often called Moore’s Law, the ability to reduce the size of transistors, memory cells,

and other features on integrated circuits that are fabricated from semiconductors

such as silicon has enabled many orders of magnitude improvements in the cost, speed, and

functionality of ICs and computers. Improvements in the memory capacity of computers also

come from reductions in the size of memory cells on semiconductor ICs and on magnetic

media; the latter improvements are often represented in terms of the magnetic recording

density of hard disk platters and magnetic tape. Although the reasons for the improvements

vary, similar stories can be told for LEDs (light-emitting diodes), lasers, displays, glass fibers

for fiber optic cable, and other components that make up the Internet.

The falling cost of electronics and computers has had a large impact on a number of

different types of systems including transportation (partially noted above), retail, wholesale,

manufacturing, financial, health care, construction, electricity and educational systems.

Improvements in computers have enabled dramatic improvements in the cost and

performance of logistics, whether these logistics are in a factory or between factories,


wholesalers, and retailers. Much of the improvement in the cost of logistics can be measured

in terms of an increased frequency of inventory turns as factories, retailers, and wholesalers

use computers and the Internet to more quickly respond to changes in demand. Improvements

in computers have also had a positive impact, to varying degrees, on finance, health care,

electricity, and education as they help these industries more effectively manage information.

However, in spite of these improvements, the cost of health care, construction, electricity

and education has remained flat or in some cases has risen over the last 50 years, as have

the prices of automobiles and other discrete-parts manufactured products. Health care costs

are rising largely because the demand for new treatments rises faster than do the

improvements in their cost and performance. Construction and education costs rise

because it has been hard to automate many tasks (at least so far). Electricity costs rise

because the benefits from increased scale were reached 60 years ago, fuel and environmental

costs are rising, and new technologies such as wind turbines (2% a year) and batteries (5% a

year) experience very slow rates of improvement[ii]. Automobile costs rise because new

functions are being added, increases in scale provide few benefits, and further improvements

in factory productivity became difficult once the easiest tasks were automated in the first half

of the 20th century.

Other manufactured goods have also not experienced rapid improvements in cost or

performance. Consider our homes and the possessions in our kitchens, bedrooms, and

bathrooms. Outside of PCs and televisions, most of our furniture, bathroom fixtures,

appliances, and other possessions (including our homes) are only marginally technologically

better than they were 50 years ago. Instead, any reductions in price primarily come from their

manufacture in low-wage countries, while improvements in their quality primarily

come from greater wealth. The former was enabled by falling transportation costs while the

latter is being driven by the technologies that are experiencing exponential improvements in


cost and performance. We are wealthier because exponential improvements in some

technologies have led to large increases in economic productivity and these increases in

productivity have enabled us to obtain better products and services whose costs and

performance have not experienced exponential improvements. This is the main reason we live

in larger homes, drive bigger cars and boats, and eat fancier foods.

1.2 What drives exponential improvements?

Because we are surrounded by exponential change, we take it for granted and fail to

understand the drivers of exponential improvements. We see better and cheaper mobile

phones, computers, video game consoles, and televisions released each year and many of us

think that such improvements are common to all products and services. This prevents us from

understanding the sources of these improvements and why some products experience more

rapid improvements than do others.

Some people might use the term innovation to describe these improvements and the sources

of them. We believe that the word innovation is one of the most over-used words in the

business and economics literature and the over-use has resulted in the word representing a

black box in which we somehow magically find solutions. Smart people are somehow able to

reach into this black box (sometimes by using another cliché “thinking outside the box”) and

pull out solutions better than the rest of us.

For example, many people apparently believe that individuals such as Steve Jobs are

continuously finding revolutionary designs that are much more effective and efficient than

previous designs. Furthermore, some may have concluded that if all managers acted like

Steve Jobs, all technologies would experience exponential improvements. In our view, too

many management books encourage this simplistic thinking by focusing on innovative

managers, innovative organizations, and their flexibility and open-mindedness. By ignoring

why some technologies experience more improvements in cost and performance than do


others, they dangerously imply that the potential for innovation is the same everywhere and

thus all technologies have about the same potential for improvements.

The fact is that improving the cost or performance of a technology is not easy and

improving them by orders of magnitude is very difficult. We can automate tasks, rearrange

equipment and process steps in a novel way, or put in better material handling systems, which

are the types of changes that are often captured in so-called learning or experience curves. On

the performance side, we can rearrange parts, combine them in a novel way, or implement a

more elegant design. But such changes to either the product or process design will not by

themselves lead to a doubling of a product’s performance or the halving

of a product’s cost every few years such that orders of magnitude improvements emerge over

decades.

Such design changes are, however, needed to utilize the power of exponential

improvements. Steve Jobs and Apple did this with the iPod, iPhone, and iPad, and other

firms have done this with a much longer list of products. Exponential improvements in a

number of components enabled Apple to introduce products that are far superior to existing

ones in terms of functionality, aesthetic design, and price. Exponential improvements in

magnetic recording density and ICs enabled Apple to design the first iPod, with its very small

disk drive, in a very elegant way. Continued improvements in the performance and cost of

ICs enabled the introduction of the iPod Nano, iPhone, and in combination with better

displays, the iPad. Without the exponential improvements in ICs, magnetic recording and

electronic displays, none of these products would have been economically or even technically

possible. The genius of Steve Jobs and Apple was not only that they were able to design such

products, it was also that they were able to recognize the power of exponential improvements

and how and when these improvements would make the iPod, iPhone, and iPad economically

possible. Unlike other firms that probably accepted poor performance from their initial


products or believed that there was no market because early products failed, Steve Jobs and

Apple realized that the exponential improvements would continue and eventually make new

designs in these products economically feasible.

There are several key issues here. One issue involves understanding when these exponential

improvements make new designs possible. A second involves understanding which

technologies are experiencing or will experience exponential improvements in performance and cost and

thus which technologies should receive our focused attention. Other issues include how to

manage these processes. This book focuses on the first two issues.

Our research suggests that improvements in performance and cost are largely driven by two

mechanisms: 1) creating new materials (and often their associated processes) to better exploit

their underlying physical phenomena; and 2) geometric scaling. Some technologies directly

experience improvements through these two mechanisms while those consisting of higher-

level “systems” indirectly experience them through improvements in specific “components.”

Our research also shows that the most rapid rates of improvements are primarily driven by a

subset of these two mechanisms and this partly explains why electronic-based technologies

have experienced such rapid rates of improvement. First, creating new materials (and

processes for them) most effectively leads to rapid improvements in performance and cost

when new classes of materials are continuously being created and when microstructures (e.g.,

thin films for electronics) are constructed with these materials[iii]. Second, technologies that

benefit from reductions in scale (e.g., for electronics) have experienced much more rapid

improvements than have technologies that benefit from increases in scale (e.g., for energy).

Understanding these mechanisms can help firms, universities, and governments choose

technologies that have the potential for rapid improvements and choose technologies that will

help us to solve problems, including global ones.

The first mechanism involves the creation of materials that better exploit physical


phenomena. The word “create” is used because scientists and engineers often create materials

that do not naturally exist (as opposed to finding them) and in doing so must also create the

processes for the materials. Most of these improvements come from creating new classes of

materials while smaller ones involve modifications to either existing materials or processes.

Rates of improvements are also large when the materials are used for microstructures such as

diodes, transistors, other P-N junctions (e.g., solar cells), and quantum wells and dots for

lasers. As described in Chapter 2, the realization and exploitation of the physical phenomena

that form the basis of batteries, lighting, displays, vacuum tubes, ICs, magnetic storage, and

solar cells requires specific types of materials, and creating better materials has taken many

years. The better materials exploited the physical phenomena more efficiently than did other

materials, and this higher efficiency also often led to lower costs because fewer materials are

needed.

Strong bases of scientific knowledge facilitated the creation of these better materials and

without these broad and deep knowledge bases, these materials would likely never have

been created. Thus, supporting scientific research is a key part of

creating these new materials. Looking forward, we need support for this science in order to

help engineers create the right materials such as those for solar cells. Improvements in our

understanding of photovoltaic materials help us improve their efficiencies, and funding this

type of science is much more cost effective than subsidizing the production of solar cells.

A second mechanism of improvement involves changes in scale. Some technologies have

benefited from increases in scale and some have benefited from reductions in scale, which

were mentioned above and are addressed in more detail in Chapters 3 and 4. For example,

engines, steam turbines, ships, and airplanes have benefited from increases in scale; this is

why we have large electrical generating stations, transportation equipment, and facilities for

handling these large ships and airplanes. For reductions in scale, examples


include ICs, magnetic disks and tape, and optical disks.

Some readers may be baffled by this logic. How can changes in scale have such a

large impact on the performance and cost of technologies? One way to understand the

importance of scale is to start with a familiar example. Most of us have noticed that short and

thin people feel colder in an air conditioned room or in a cold climate than do tall and not so

thin people. The reason is that heat production rises with volume while heat loss rises with

surface area. The result is that heat production rises faster than does heat loss as the

dimensions for people (and other mammals) are increased. This causes short and thin people

to feel colder than others and it also provides an incentive for thin people to gain weight.

Thus, people in northern latitudes are often heavier than people in equatorial regions; similar

arguments are made for many organisms[iv].
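
The arithmetic behind this example is the square-cube law. The following minimal sketch idealizes a body as a cube of side L (our simplification for illustration, not a claim from the book): heat production tracks volume, heat loss tracks surface area, and their ratio grows linearly with L.

```python
# Idealize a body as a cube of side L: volume ~ heat production,
# surface area ~ heat loss, so volume/surface grows linearly with L.
for L in (1.0, 2.0, 4.0):
    volume = L ** 3
    surface = 6 * L ** 2
    print(f"L={L}: volume/surface = {volume / surface:.2f}")
# L=1.0: 0.17, L=2.0: 0.33, L=4.0: 0.67 -- doubling L doubles the ratio,
# which is why larger bodies retain heat more easily
```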

We build large furnaces and smelters for the same reasons. Since heat loss rises more slowly

than does heat production, large furnaces and smelters have lower heat loss per output than

do small furnaces, which is important since energy costs are a significant fraction of the

operating costs for furnaces and smelters. A second reason we build large furnaces and

smelters is that capital costs rise with surface area and output rises with volume since capital

costs primarily involve an outer shell and hopefully a thin one. In later chapters, similar logic

(and supporting data) is applied to pipes and reaction vessels in chemical plants and to

transportation equipment such as oil tankers, freighters, trucks, and aircraft.
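
The same geometry yields the classic “two-thirds rule” used in plant cost estimation. A minimal sketch, assuming (as the text does) that capital cost tracks the shell’s surface area while output tracks the enclosed volume; the scale-up factors are illustrative.

```python
# If cost ~ surface area and output ~ volume, then cost ~ output**(2/3),
# so capital cost per unit of output falls as scale increases.
for scale_up in (1, 8, 64):              # output (volume) multiples
    cost = scale_up ** (2.0 / 3.0)       # shell (surface area) multiple
    print(f"output x{scale_up}: cost x{cost:.1f}, "
          f"cost per unit output x{cost / scale_up:.2f}")
# output x8 needs only ~x4 the shell, halving the unit cost;
# output x64 needs ~x16, quartering it
```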

But size brings disadvantages. Because weight usually rises with volume and strength rises

with surface area, larger furnaces, smelters, pipes, reaction vessels and transportation

equipment require thicker walls unless better materials are available. Furthermore, even if the

better materials are available, they may cost more money than the lower performing materials

and this will reduce the benefits of increasing scale. For example, airplanes have

benefitted from increases in scale to a lesser extent than have oil tankers, freight ships, buses


or trucks because weight is more important for aircraft than for other transportation

equipment and expensive composites must be used in order to increase the scale of aircraft.

Similar arguments can be made with humans and other mammals. For example, large

mammals like elephants require heavy legs that cannot move as fast as those of a gazelle or a

cheetah in order to support their large size. This is because muscle strength rises with a muscle’s

cross-sectional area (dimension squared) while weight rises with volume (dimension cubed).

Thus, increases in scale bring new challenges for both organisms and for technologies.

However, while organisms require these challenges to be solved by accidental mutations that

may take thousands if not millions of generations[v], humans can purposely redesign the

technologies. For example, as the strength-to-weight ratios of steel and other materials were

improved over the last few centuries, larger furnaces, smelters, reaction vessels, and pipes

have been implemented without requiring much increase in the thickness of their steel walls.

On the other hand, some technologies benefit from reductions in scale. Reducing the scale

of transistors, memory storage regions, and other features, and creating the processes needed to

achieve these reductions in scale, has led to many orders of magnitude improvements in the

cost and performance of microprocessor and memory ICs and magnetic storage[vi]. This is

because for these technologies, reductions in scale lead to improvements in both performance

and cost. For example, placing more transistors or magnetic storage regions in a certain area

increases the speed and functionality and reduces both the power consumption and size of the

final product, which are improvements in performance for most electronic products (they also

lead to lower material, equipment, and transportation costs). The combination of both

increased performance and reduced costs as size is reduced has led to orders of magnitude

improvements over many years in the performance to cost ratio of many electronic

components. For example, a three orders of magnitude reduction in transistor length has led

to about nine orders of magnitude improvements in the cost of transistors on a per transistor


basis.
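
The geometric part of this arithmetic can be checked directly. The decomposition below is our own illustration, not the book’s: a thousand-fold reduction in length alone yields a million-fold gain in transistors per unit area, leaving roughly a factor of a thousand to come from other gains (larger wafers, better yields, cheaper processing).

```python
length_reduction = 1e3                     # 3 orders of magnitude in length
area_density_gain = length_reduction ** 2  # transistors per area ~ 1/length^2
total_cost_gain = 1e9                      # 9 orders of magnitude per transistor
other_factors = total_cost_gain / area_density_gain
print(f"density gain from scaling alone: {area_density_gain:.0e}")  # 1e+06
print(f"implied gain from other factors: {other_factors:.0e}")      # 1e+03
```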

Here again, analogies can be made with organisms. As the size of an organism becomes

smaller, its strength-to-weight ratio rises, enabling small ants to carry more than their

own weight, and its weight-to-surface-area ratio falls, enabling water bugs literally to walk

on water. For ants, this is because muscle strength falls with a muscle’s cross-sectional area

(dimension squared) while weight falls with volume (dimension cubed), and thus the

strength-to-weight ratio rises as dimensions fall. For water bugs, this is because the surface

area of their feet falls with dimension squared while weight falls with dimension cubed;

thus the weight-to-surface-area ratio falls as dimensions fall, leaving more surface area to

support each unit of weight. Furthermore, as size falls below a millimeter, molecular forces

become more important than gravity, and thus small organisms must exploit these

molecular forces to survive. Eventually we reach the sizes of cells, where high surface-area-

to-volume ratios are important, and the sizes of DNA, which is the information storage device for

organisms.

One key point is that there are no near-term limits to reducing the size of features that are

used to store and process information, which was first noted by Richard Feynman in his

famous 1959 speech “There's Plenty of Room at the Bottom: An Invitation to Enter a New

Field of Physics.” Thus, we can use atoms, electrons, or even the spins of electrons to store

information. A second key point is that other phenomena also benefit from reductions in

scale. Finding these phenomena is a key challenge for entrepreneurs in, for example, bio-

electronic ICs, micro-electronic mechanical systems (MEMS), and nanotechnology. As

mentioned in the next section and addressed throughout the book, some phenomena benefit

from reductions in scale and thus as humans continue to become better at reducing the scale

of things, these phenomena and the devices and systems that incorporate them will likely

experience exponential improvements in cost and performance. Third, as users of


technologies, we will notice these exponential improvements primarily in systems that

incorporate these “components” and thus we may not even know the reasons for the

improvements in systems. For example, rapid improvements in mobile phones come from

improvements in ICs, which benefit from reductions in scale. Fourth, the search for these

systems along with better materials, levels of scale, and new organizational forms is an

evolutionary process in which new materials, levels of scale, and organizational forms are

being continuously tried and selected and where incumbent firms often fail. Chapters 3 and 4

and Part II describe how these improvements have created and continue to create new systems

opportunities for new entrants. We can use these improvements to create not only new

products and services, but new forms of homes, workplaces, cities, and other higher order

systems.

1.3 Are these exponential improvements S-curves?

The predominant viewpoint is that improvements in performance or in performance per

cost follow an S-curve, popularized in Richard Foster’s 1986 book Innovation: The Attacker’s

Advantage. Following a rather flat rate of improvement, the rate of improvement accelerates,

thus leading to a rather steep rate of improvement; later it slows. For the early part of the

purported S-curve, Foster argued that improvements accelerate as vertically integrated firms

and specific government agencies move research funds from an old to a new technology in

response to increases in demand or a slowdown in the rate of improvement in the old

technology. Some call this punctuated equilibrium, in honor of the sudden jump in the

number of biological species during the so-called Cambrian explosion. For the later part of

the purported S-curve, Foster argued that the rates of improvement slow as diminishing

returns and natural limits emerge; this causes research funds to move to a still newer

technology and thus the newer technology’s rate of improvement begins to accelerate[vii].


This predominant viewpoint is so ingrained in our thinking that many books will describe

it even as they show figures of Moore’s Law and other technologies

in which straight lines are fairly evident on log performance vs time plots. One of the best

examples can be found in Kevin Kelly’s What Technology Wants. After showing data on the

number of transistors per chip, areal recording density, and other technologies, each with

straight lines on log performance vs. time plots, he then shows a figure with the classic

S-curve. The theory of S-curves is a good example of a field trying to fit the data to an old

theory even when the theory does not fit the facts.

We believe that there is a better explanation for rates of improvements and the roughly

straight lines on log performance vs time curves that they represent and that are shown

throughout this book. Research on new technologies is done in a very decentralized world in

which millions of researchers (one estimate is six million[viii]) compete for publications,

prestige, and fame, in which curiosity is a major driver of their efforts, and in which they quickly incorporate new

information into their search efforts. Rather than wait for the improvements in an old

technology to slow, they look for and combine new scientific phenomena, new explanations

and applications for them, and materials that better exploit these phenomena. They attempt to

reduce the scale of new technologies that may replace ICs and to combine existing and new

components and materials into new systems.

The decentralized world of funding supports this decentralized world of research. Unlike

the vertically integrated firm that perhaps underpinned the S-curves in Richard Foster’s

Innovation: The Attacker’s Advantage, we live in a vertically disintegrated world where funding decisions are

made by tens if not hundreds of thousands of people. Most researchers, including ones in

universities, government labs, and even in corporations, are expected to investigate new

technologies, to create their own research plans and to publish something new and different.

This enables and requires them to quickly move their efforts to newly found scientific


phenomena, materials, components, and systems long before the improvements in an old

technology have slowed. Thus, there is no flat line on a log plot that precedes acceleration

and instead there are many performance and/or cost curves that are competing with the curves

of the dominant technology.

Henry Chesbrough describes this world using the term Open Innovation. Unlike Richard

Foster’s world of large vertically integrated firms that develop the technologies that

they use in their products, firms now both buy and sell technology, and many new

technologies simultaneously compete for our attention. Small firms may focus on selling

technology and the only way to succeed in such a business is to focus on the early years of a

new technology. This, too, pushes them to move quickly to newly found

scientific phenomena, materials, components, and systems long before the improvements in

an old technology have slowed.

One caveat to this argument is that if rates of improvements are extremely rapid such that

orders of magnitude improvements are experienced, the improvements must be plotted on a

logarithmic plot. If not, only the most recent data will appear as improvements and the older

data points will be essentially flat. For example, if one were to plot Moore’s Law on a linear

scale, none of the improvements before 2005 would look important, in spite of the fact that

prior improvements made the personal computer, mobile phone, and the Internet

economically feasible.
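
A quick way to see this is to plot the same exponential series on both axes. The sketch below uses a synthetic series that doubles every two years, not actual Moore’s Law data.

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1970, 2011)
performance = 2.0 ** ((years - 1970) / 2.0)  # doubles every two years

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(8, 3))
ax_lin.plot(years, performance)
ax_lin.set_title("linear scale: early gains look flat")
ax_log.semilogy(years, performance)
ax_log.set_title("log scale: a straight line")
plt.tight_layout()
plt.show()
```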

Focusing on the later part of the S-curve, we believe that the notion of limits is also

overemphasized. Diminishing returns do emerge for many technologies, particularly if one plots

improvements vs. research efforts. Since research funding has increased over time for most of

the technologies discussed in this book, even straight lines for improvements over time

suggest diminishing returns with respect to effort, which is somewhat consistent with Foster’s

arguments. Nevertheless, these straight lines are on a log plot so the rates of improvements


are very rapid. Furthermore, many of the technologies discussed in this book do not show

actual limits on a log plot and, as far as the actual data can reveal, we are probably still far from

the physical limits and thus at this point S-curve theory is more myth than fact.
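
To make the effort argument concrete, consider a toy calculation with assumed numbers (not the book’s data): performance that grows tenfold per decade traces a straight line on a log plot, yet if research effort doubles each decade, the improvement bought per unit of effort steadily shrinks.

```python
for decade in range(1, 5):
    log_gain_this_decade = 1.0           # one order of magnitude per decade
    effort_this_decade = 2.0 ** decade   # effort doubles each decade
    print(f"decade {decade}: orders of magnitude per unit effort = "
          f"{log_gain_this_decade / effort_this_decade:.3f}")
# 0.500, 0.250, 0.125, 0.062 -- diminishing returns to effort,
# even though progress over time remains exponential
```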

In any case, the existence of straight lines without sudden jumps helps us understand

when new technologies become economically feasible. It allows us to ignore the purported

source of jumps in performance during the early years of a technology and focus on the rather

steady improvements in performance and our cost that occur long before a technology is

commercialized on a broad scale.

1.4 Thinking about the future of new technologies

There are many ways to think about future technologies and each of them has its own

limitations. The most common way to think about the future is to talk with the experts and

find out what is or will become scientifically and technically feasible in the near future. Then

through your own knowledge of customer needs or through some investigation of specific

applications, one can consider how these scientifically and technically feasible technologies

might solve specific problems or provide basic human needs. Herman Kahn, Michio Kaku,

and Mark Stevenson[ix] are among the many scientists and engineers who have used this

approach to describe possible futures. The basic problem with this approach is that not all

scientifically and technically feasible technologies become economically feasible.

While technologies must be scientifically and technically feasible before they can become

economically feasible, understanding which of them will is highly problematic.

One reason it is highly problematic is that many assume that once we begin making things,

they get cheaper through the so-called learning or experience curve. But as we discussed

above, some technologies experience much faster rates of improvement than do other

technologies and these technologies have a better chance of becoming economically feasible


than do other technologies. Part II will discuss many technologies that have experienced rates

of improvement of greater than 15% a year with little or no commercial production. A second

reason it is difficult to identify the technologies that will become economically feasible

revolves around cognitive biases.

According to research by Nobel Laureate Daniel Kahneman[x], people tend to assess the

relative importance of issues, including technologies, by the ease with which they are

retrieved from memory and this is largely determined by the extent of coverage in the media.

For example, currently the media talks about wind, battery-powered vehicles, bio-fuels, and

solar cells and thus many people think these technologies are experiencing rapid rates of

improvement and will soon become economically feasible. Furthermore, judgments and

decisions are guided directly by feelings of liking and disliking, with little deliberation and

reasoning. Kahneman recounts a conversation he had with a high-level financial executive

who had invested in Ford because he “liked” their products without considering whether Ford

stock was undervalued. Similarly, some people “like” or “dislike” technologies without

considering whether the technologies are experiencing rapid rates of improvement.

For example, consider the problems with forecasting the future of mobile phones and the

error that McKinsey made in its infamous 1980 forecast. So-called cellular phones that reuse

the frequency spectrum in multiple “cells” became scientifically feasible in the 1940s and

technically feasible in the late 1970s when digital switching equipment enabled users to be

automatically switched between different base stations as the users moved between different

cells. How should we have thought about their economic feasibility in 1980 and thus their

expected diffusion by 2000? McKinsey’s forecast in the early 1980s expected one million

global users by the year 2000, presumably by asking people whether they wanted a mobile

phone. Since most people could not have “retrieved from memory” the type of future that

might emerge from a “mobile lifestyle” they would have sensibly been pessimistic about


mobile phones. Thus, one lesson from this inaccurate forecast is that it is difficult to

understand user needs, particularly longer-range ones.

However, we think there is a second important lesson from this inaccurate forecast and this

lesson is actionable as it has implications for the kinds of questions we should ask. McKinsey

should have focused on the fact that the costs of mobile phones and their services would

dramatically fall due to Moore’s Law. Furthermore, these costs would dramatically fall even

if mobile phones did not begin to diffuse because Moore’s Law was being driven by a wide

range of other electronic products. This would have caused McKinsey to reach completely

different conclusions and perhaps ask potential users a different set of questions. For

example, they could have asked whether people were interested in a free phone whose

subscription provides 100 minutes of talk time for less than $30 a month, a situation that has

existed with mobile phones for many years.

Jump ahead to the year 2000 when many industry insiders believed that travel, location-

based, and other business-related services for mobile phones were a huge market that was

ready to take off. They believed this because these services (and GPS for automobiles) were

experiencing large rates of growth for the Internet and thus they were often discussed in the

media; this made them easy to retrieve from memory. Although these services are now

diffusing rapidly, it took many years before this happened and most of the hopeful suppliers

in 2000 have long since gone bankrupt. Here the lesson is that cognitive biases exist and just

as one can underestimate the long term effects of exponential improvements like those found

in Moore’s Law, one can overestimate their short term effects. In the year 2000 firms should

have been analyzing the levels of performance and cost needed in displays, microprocessor

and memory ICs, and networks before various types of mobile Internet content and

applications would become technically and economically feasible. This would have caused

them to be less optimistic about location-based services and to first emphasize simpler


applications such as ringtones and wallpaper, which ended up diffusing long before more

sophisticated location services began to diffuse.

There are several key points here. First, when a technology is experiencing rapid

improvements in cost and performance and it has a large impact on a higher level system, the

rapid improvements in the technology can lead to large improvements in the cost and

performance of the higher-level system. Mobile phones have experienced dramatic

improvements in cost and performance through exponential improvements in ICs (and also

displays) and these and other (e.g., batteries) components still make up about 95% of a

phone’s cost. Thus, we can say much more about the future of a system by understanding its

components and the rate of improvements that these components are experiencing than by

using the learning curve, which focuses on the 5% of costs that are phone assembly costs.

Furthermore, analyzing the rates of improvement in a system’s components enables one to

analyze when a new technology might become economically feasible even before production

of the system begins, something that the learning curve can’t do.
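
A minimal sketch of such a component-based projection follows. The cost shares and annual improvement rates are hypothetical (the 95/5 split between components and assembly echoes the phone example above); the point is that the system’s cost trajectory is dominated by its fast-improving components, not by assembly.

```python
def system_cost(years: float) -> float:
    """Project relative system cost from component improvement rates."""
    # (share of initial cost, annual rate of cost decline) -- hypothetical
    components = [
        (0.60, 0.30),  # ICs: large share, fast decline
        (0.25, 0.20),  # display
        (0.10, 0.05),  # battery: slow decline
        (0.05, 0.00),  # assembly: roughly flat
    ]
    return sum(share * (1.0 - rate) ** years for share, rate in components)

for t in (0, 5, 10):
    print(f"year {t}: relative system cost = {system_cost(t):.2f}")
# year 0: 1.00, year 5: ~0.31, year 10: ~0.15 -- driven by the components
```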

Second, if the system or the production of the system benefits from increases in scale, as

some do (but mobile phone systems don’t), we can use data for both the system and its

components to analyze future costs and performance. This is relevant for technologies such as

new display technologies, solar cells, and wind turbines. Third, a requirement of this

approach is that we must understand the system and its components. This is a challenge for

even experienced engineers but we should not be surprised that better approaches involve

deeper understanding. Fourth, and most importantly, we must identify the technologies that

are experiencing or may experience exponential improvements in cost or performance (in particular rapid

ones) and their impact on higher-level systems. This is the purpose of this book.

1.5 Technologies Undergoing Rapid Improvements

Following a more detailed analysis of the drivers of exponential improvements in Part I, we


use our new-found knowledge about these drivers to analyze a number of technologies that

are currently experiencing or expected to experience exponential improvements. Part of this

analysis shows that these technologies have experienced rapid improvements without

production, thus providing further evidence that something other than cumulative production

is driving these improvements.

Chapter 5 addresses ICs and the new electronic systems that improvements in ICs have

made and continue to make economically feasible. ICs have experienced dramatic

improvements in their performance and cost as feature sizes were reduced and these

improvements have enabled the emergence of and improvements in a wide variety of

electronic systems. Furthermore, these improvements are likely to continue for at least

another ten and probably 20 to 30 years for a variety of reasons; in combination with

improvements in other technologies, new forms of electronic systems are likely to become

economically feasible.

Chapter 6 addresses micro-electronic mechanical systems (MEMS): MEMS are small

machines that are fabricated using some of the same equipment and processes that are used to

fabricate ICs. One difference is that unlike ICs, whose inputs are electrical signals, the inputs

for MEMS also include pressure, temperature, gravity, magnetic fields, and biological

materials. While some types of MEMS such as small gears and motors do not benefit from

reductions in scale and thus are appropriate only when small size is demanded, others

do. For example, as their feature sizes are made smaller, mechanical resonators resonate at

higher frequencies, gas chromatographs and bio-electronic ICs become faster and more

sensitive, the resolution of “memjet” ink printers increases, and digital micro-mirrors and

optical switches become faster. One challenge is to develop a common set of materials,

processes and equipment for MEMS so that different ones are not needed for each

application.


Chapter 7 takes the arguments about reductions in scale one step further and looks at

nanotechnology. While many analyses of nanotechnology seem to treat it as a non-analyzable

magical kind of technology, this chapter focuses on the phenomena that benefit from small

scale, the technologies that exploit these phenomena, and the steady improvements in the

performance and cost of graphene, carbon nanotubes, other single-atom-thick materials,

quantum dots, nanoparticles, and nanofibers. These improvements are occurring largely

because scientists and engineers continue to create materials, including new classes of

materials, that benefit from small-scale phenomena. The large number of new classes of

materials that continue to be created suggests that nanotechnology will have a large impact

on our world in the next 50 years.

Chapter 8 looks at new forms of lighting such as light-emitting diodes (LEDs) and organic

LEDs (OLEDs) and also at laser diodes, which are physically somewhat similar to LEDs.

Creating new materials and processes for them is the major driver of improvements in LEDs

and OLEDs, and these new materials have enabled rapid increases in the luminosity per watt

of LEDs and OLEDs along with improvements in size and flexibility. In combination with

improvements in ICs and other components, these improvements can enable a dramatic

change in the way that spaces are lighted. Improvements in lasers are also occurring partly

because of reductions in scale that are made possible by new processes and to some extent

new materials. In combination with improvements in MEMS and ICs, improvements in laser

diodes are also making new systems economically feasible; a good example is autonomous

vehicles.

Chapter 9 looks at new forms of displays such as 3D LCDs, OLED-based displays, and

holographic ones and the reductions in cost that have occurred and continue to occur in them.

Improvements in them are driven by the creation of new materials, improvements in

components such as lasers and ICs for holographic displays, and increases in the scale of the


substrate and equipment. These displays are fabricated on large substrates and then cut into

smaller displays for individual televisions and computer screens. These increases in scale

have enabled dramatic reductions in the cost of LCDs and these increases in scale are now

driving reductions in the cost of OLED-based displays and solar cells and will also do this for

new processes such as roll-to-roll printing.

Chapter 10 analyzes several technologies within health care. These include bio-electronics

with a focus on bio-electronic ICs, flexible electronics, and DNA sequencers. Bio-electronic

ICs are MEMS that include micro-fluidic channels. Since these ICs benefit from reductions in

scale and these reductions in scale lag those of ICs by about 30 years, as the feature sizes

continue to be reduced, many new types of products will emerge, including point-of-care

diagnostic equipment and artificial implants such as bionic eyes. Improvements in flexible

electronics, which are primarily driven by the creation of new materials, are also occurring

and making artificial implants more economically feasible. The third type of health care

technology that is experiencing exponential improvements is DNA sequencers. They also

benefit from reductions in scale, and these reductions in scale are a major reason why

sequencing and synthesizing DNA have experienced exponential improvements in cost and

performance. Unlike bio-electronic ICs and MEMS, however, these reductions in scale have

also involved many changes in technology where processes similar to those used to

manufacture ICs are one of the competing technologies. Many believe that the exponential

improvements in DNA sequencers will continue and they will lead to dramatic changes in the

way drugs are discovered, new materials are created, and health care is delivered.

Chapters 11 and 12 consider two types of electronic systems that benefit from

improvements in “components.” Chapter 11 analyzes the impact of better semiconductor

lasers, photodiodes, ICs, and optical-based MEMS on both wireline and wireless

telecommunication systems. Exponential improvements in these components enable


exponential improvements in data rates, speeds, and in the efficient use of the frequency

spectrum. Chapter 12 focuses on human-computer interfaces and how improvements in

ICs, CCDs (charge-coupled devices), and magnetic devices enable exponential improvements in

speech recognition, gesture-based interfaces, and neural ones that go beyond current

keyboard and touch-based ones.

Chapter 13 looks at superconductors, which have zero resistance and thus infinite

conductance at very low temperatures. The creation of new materials, including new classes

of superconducting materials has enabled steady increases in the critical temperatures,

currents, and magnetic fields for superconducting materials, and critical temperatures are now

approaching the outdoor temperatures of Antarctica. These superconductors are already used in magnetic

resonance imaging (MRI) systems and improvements are making superconductors

economically feasible in a broader set of applications such as computers (i.e., quantum

computers) and in generators, transformers and transmission cables for energy.

1.6 Who is this book for?

This book is for people interested in the future and in how to use knowledge about

technological trends to understand, design for, and succeed in the future. This includes R&D

managers, hi-tech marketing and business development managers, policy makers and

analysts, professors, entrepreneurs and employees of think tanks, governments, hi-tech firms,

and universities. Rates of improvement in specific technologies and an understanding of their

drivers can help us understand when these new technologies and systems composed of them

might become economically feasible. Firms can use such information to better understand

when they should fund R&D or introduce new products that involve a new technology. Policy

makers and analysts can use such information to think about whether technologies have a

large potential for improvement and how governments can promote further or more rapid


improvements in them.

This book is of particular importance to those people who are trying to design new

“systems.” Technologies that experience rapid rates of improvement enable new

combinations of components to become economically feasible and these new combinations

enable higher order systems including new products and services, new forms of health care

systems, and even new forms of cities to become economically feasible. One cannot

understand the future of cities without understanding the technologies that are experiencing

rapid rates of improvement. Part II helps us understand the future of cities and the concluding

chapter will describe a future that is much different from the predominant viewpoint.

This book can also help us make better R&D policy. We believe that governments should

strongly fund basic and applied research in those technologies that have the potential for large

improvements in performance and cost and this book helps governments identify those

technologies. This viewpoint builds from the economic perspective that firms under invest in

basic and applied research because of large uncertainties and because they cannot appropriate

all the benefits from basic and applied research. By funding a broad range of technologies

that have the potential for large improvements in cost and performance, governments can

facilitate these improvements and then firms can commercialize these technologies as their

cost and performance comes closer to economic feasibility.

This viewpoint is different from one predominant viewpoint in which governments

subsidize demand or fund R&D for specific technologies in order to solve specific problems.

This has been the dominant approach for clean energy in which demand-based subsidies for

solar cells, wind turbines, and electric vehicles are common. We argue not only that

funding for R&D has a larger impact on improvements in cost and performance than do

subsidies for production, but also that the rates of improvement for wind turbines (2% per year) and

batteries (5% per year) are very slow. This book suggests other approaches that can have a


larger impact on the use of fossil fuels than can wind turbines and batteries, and they will

likely become economically feasible sooner. For example, it

will take more than 75 years for the energy storage density of batteries to reach that of

gasoline while autonomous vehicles will diffuse long before this and they can increase

vehicle speeds and thus fuel efficiency.
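
A rough check of this claim, with assumed energy densities (the improvement rate matches the 5% per year cited above; the density figures are our ballpark assumptions, not the book’s):

```python
import math

gasoline_wh_per_kg = 12_000.0  # assumed energy density of gasoline
battery_wh_per_kg = 150.0      # assumed energy density of lithium-ion cells
annual_improvement = 0.05      # 5% per year, as cited in the text

years = (math.log(gasoline_wh_per_kg / battery_wh_per_kg)
         / math.log(1 + annual_improvement))
print(f"years to parity at 5%/yr: {years:.0f}")  # ~90 years, well over 75
```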

This is just one example of how technologies with rapid rates of improvements suggest

other approaches to reducing the use of fossil fuels in vehicles and to solving other problems

including ones of global importance. Technologies that are experiencing rapid rates of

improvement form a type of tool chest from which we can pull out technologies and combine

them into solutions to global problems. Not only does the current performance and cost of

these technologies provide us with useful tools here and now, their rapid rates of

improvement mean that better tools continue to emerge and we should be thinking about how

these better tools can help us solve global problems.

Finally, this book is also for young people. Young people have more at stake in the future

than anyone else and this book is written to help people think about their future and the future

of various systems. It helps students think about solving global problems and where opportunities

may emerge, and thus about the technologies in which they should study and begin their careers. In particular,

it helps students understand the technologies that are undergoing rapid improvements and

what this means for higher-level systems. We live in a system-based world in which most

people design systems without needing to design the components for those

systems, because someone else, often someone in a different

organization, designs those components. Thus students need to understand those components that are undergoing rapid

improvements and why they are undergoing these improvements before they can conceive of

the possible ways to design the higher level systems. This book can help students do this.


Part I

What drives exponential change?

Some technologies experience faster rates of improvement than do other technologies,

which this book calls exponential improvements. Understanding the reasons for these faster

rates can help us find those technologies that are likely to experience rapid rates of

improvement in the future and when these new technologies might become economically

feasible. To understand these reasons, we have investigated a wide variety of technologies,

their rates of improvement, and the engineering literature’s assessment of these

improvements.

Table I.1 summarizes the rates of improvement for a number of technologies that we investigated. Although many can be classified in a variety of ways, these technologies are primarily organized into the transforming, storing, and transporting of energy, information, and living organisms, which is consistent with some characterizations of engineering systemsxi. Since a variety of performance measures are often relevant for a specific technology, data were collected on multiple measures, some of which are expressed as performance of basic functions per unit cost while others are expressed per unit mass or per unit volume.
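To make the table easier to interpret, note that an annual rate of r percent compounds to (1 + r/100)^n over n years. The following minimal sketch in Python (our own illustration; the rates and periods are taken directly from Table I.1) converts a few of the table's annual rates into cumulative fold-improvements and orders of magnitude:

    import math

    def cumulative_improvement(rate_pct, years):
        """Fold-improvement from compounding rate_pct percent/year over `years` years."""
        fold = (1 + rate_pct / 100) ** years
        return fold, math.log10(fold)  # fold-change and orders of magnitude

    # Illustrative entries from Table I.1
    entries = [
        ("ICs, transistors/chip", 38, 1971, 2011),
        ("Batteries, energy/mass", 4, 1882, 2005),
        ("Wireline transport, bits x distance/cost", 35, 1858, 2005),
    ]
    for name, rate, start, end in entries:
        fold, orders = cumulative_improvement(rate, end - start)
        print(f"{name}: ~{fold:.2g}x, ~{orders:.1f} orders of magnitude")

For instance, 38% per year sustained for 40 years corresponds to roughly five to six orders of magnitude, which is why seemingly modest differences in annual rates matter so much over decades.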

Identifying the drivers, or mechanisms, for these improvements is highly problematic.


As with any phenomenon, there are multiple reasons, which can be organized in a variety of ways, some of them hierarchical. Our goal was to organize these drivers hierarchically such that high-level drivers are broader and more general than lower-level ones. For example, high-level drivers include demand, novel combinations of components, and government policies that promote innovation and competition, while low-level mechanisms include the detailed problem solving done daily by engineers and scientists. Our goal is to identify a set of mechanisms that lie between these low- and high-level mechanisms and that help us design better government policies and management strategies, including a better understanding of when new technologies become economically feasible.

The existence of straight lines without sudden jumps as in the S-curve aids us in our

search for the drivers of improvements and in our understanding of when new technologies

become economically feasible. It allows us to ignore the purported source of jumps in

performance during the early years of a technology and focus on the rather steady

accumulation of capabilities and knowledge that appear to form the basis of the straight lines.

This is particularly true for the orders of magnitude improvements where the later

improvements are of much higher magnitude on an absolute scale than are the earlier ones,

probably because the later ones benefit from the very large base of knowledge that has

accumulated over time. This steady accumulation of knowledge and capabilities guided our

search for the drivers and it is evident in the two mechanisms that we identified.

Our research indicates that improvements in performance and cost are largely driven by two mechanisms: 1) creating materials to better exploit physical phenomena; and 2) geometric scaling. Some technologies directly experience improvements through these two

mechanisms while those consisting of higher-level “systems” indirectly experience them

through improvements in specific “components.” Chapter 2 focuses on the first mechanism

while Chapters 3 and 4 focus on the second mechanism, one for smaller and one for larger


scale. All three chapters deal with the relationships between components and systems.

Our research also shows that exponential, i.e., very rapid, improvements are primarily

driven by a subset of these two mechanisms. Creating new materials (and processes for them)

can lead to rapid improvements in performance and cost when new classes of materials are

continuously being created and when the materials are used for microstructures such as in

diodes, lasers, transistors, and other P-N junctions (e.g., solar cells), although some

exceptions will be shown in Part I (e.g., glass fiber). For scale, technologies that benefit from

reductions in scale have experienced much more rapid improvements than have technologies

that benefit from increases in scale.

These rapid rates of improvement also lead to rapid rates of diffusion (which do follow an S-curve). We argue that rapid diffusion is a direct result of the rapid rates of improvement that became more common in the second half of the 20th century than in the first half: rapid improvement causes a technology to quickly become economically feasible for a larger number of customers and applications. For example, electronic products such as computers and mobile phones have experienced much faster rates of diffusion than have electric or hybrid vehicles because the former have experienced much more rapid rates of improvement than have the latter.


Table I.1 Annual Rates of Improvement for Specific Technologies

Technology | Dimensions of measure | Time Period | %/Year

Energy Transformation Technologies
Lighting | Luminosity/Watt | 1840-1985 | 4.5
LEDs | Luminosity/Watt | 1965-2008 | 31
Organic LEDs | Luminosity/Watt | 1987-2005 | 29
GaAs Lasers | Power/length-bar | 1987-2007 | 30
Photosensors | Light sensitivity | 1986-2008 | 18
Solar Cells | Power/cost | 1957-2003 | 16
Aircraft Engines | Gas pressure ratio | 1943-1972 | 7
Aircraft Engines | Thrust/weight-fuel | 1943-1972 | 11
Aircraft Engines | Power of aircraft engine | 1927-1957 | 5
Piston Engines | Energy/mass | 1896-1946 | 13
Electric Motors | Energy/mass | 1880-1993 | 3.5
Electric Motors | Energy/volume | 1890-1997 | 2.1

Energy Storage Technologies
Batteries | Energy/volume | 1882-2005 | 4
Batteries | Energy/mass | 1882-2005 | 4
Batteries | Energy/unit cost | 1950-2002 | 3.6
Capacitors | Energy/cost | 1945-2004 | 4
Capacitors | Energy/mass | 1962-2004 | 17
Flywheels | Energy/cost | 1983-2004 | 18
Flywheels | Energy/mass | 1975-2003 | 10

Energy Transport Technologies
Electricity Transmission | Energy transported times distance | 1890-2003 | 10
Electricity Transmission | Energy transported times distance/cost | 1890-1990 | 2

Information Transformation Technologies
ICs | Transistors/chip | 1971-2011 | 38
MEMS Printing | Drops/second for ink jet printer | 1985-2009 | 61
Computers | Instructions/time | 1945-2008 | 40
Computers | Instructions/time and dollar | 1945-2008 | 38
Liquid Crystal Displays | Square meters/cost | 2001-2011 | 11
MRI | 1/(Resolution x scan time) | 1949-2006 | 32
CT Scanners | 1/(Resolution x unit time) | 1971-2006 | 29
Organic Transistors | Mobility | 1994-2007 | 99

Information Storage Technologies
Magnetic Tape | Bits/cost | 1955-2004 | 40
Magnetic Tape | Bits/volume | 1955-2004 | 10
Magnetic Disk | Bits/cost | 1957-2004 | 39
Magnetic Disk | Bits/volume | 1957-2004 | 33
Optical Disk | Bits/cost | 1996-2004 | 40
Optical Disk | Bits/volume | 1996-2004 | 28

Information Transport Technologies
Wireline Transport | Bits/time | 1858-1927 | 35
Wireline Transport | Bits x distance/cost | 1858-2005 | 35
Wireless Transport | Coverage density, bits/area | 1901-2007 | 37
Wireless Transport | Spectral efficiency, bits/bandwidth | 1901-2007 | 17
Wireless Transport | Bits/time | 1895-2008 | 19

Living Organism Related Technologies
Biological Transformation | Genome sequencing/cost | 1965-2005 | 35
Biological Transformation | Concentration of penicillin | 1945-1980 | 17
Biological Transformation | U.S. agricultural productivity (per input) | 1948-2009 | 1.3
Biological Transformation | U.S. corn production/area | 1945-2005 | 0.9
Transport of Humans/Freight | Ratio of GDP to transport sector | 1880-2005 | 0.45
Transport of Humans/Freight | Aircraft passengers times speed | 1926-1975 | 13

Materials Related Technologies
Load Bearing | Strength-to-weight ratio | 1880-1980 | 1.6
Magnetic | Magnetic strength | 1930-1980 | 6.1
Magnetic | Magnetic coercivity | | 8.1

Other Technologies
Machine Tools | Accuracy | 1775-1970 | 7.0
Machine Tools | Machining speed | 1900-1975 | 6.3
Laboratory Cooling | Lowest temperature achieved | 1880-1950 | 28

MEMS: micro-electronic mechanical systems; LEDs: light emitting diodes; ICs: integrated circuits; MRI: magnetic resonance imaging. Source: xii


Chapter 2

Creating Materials to Better Exploit Physical Phenomena

Most people have noticed that large numbers of new materials have emerged over the last 100 years and that many continue to emerge. For example, plastics have largely replaced metals in most mechanical products, and so-called engineered materials have become the norm as every type of material has been "engineered" to have certain characteristics. To do this, materials are added or removed, or processes are tweaked, in order to improve some measure of performance; advances in science facilitate this addition, removal, and tweaking. These advances in science form a base of knowledge for the phenomena and thus facilitate the creation of new materials that better exploit the phenomena. The word "create" is used because scientists and engineers often create materials that do not naturally exist (as opposed to finding them) and in doing so must also create the processes for making the materials.

But what might enable rapid rates of improvement that involve the creation of new materials? As shown in this and other chapters, some technologies such as organic transistors, magnetic coercivity, cutting machines, LEDs, and OLEDs have experienced very rapid rates of improvement while others such as batteries and agriculture have not. Why? While strong bases of scientific knowledge are important, they are certainly not the whole story and probably not the main reason, since stronger bases of knowledge probably exist for batteries and agriculture than for organic transistors, LEDs, and OLEDs.

We believe that rapid rates of improvement reflect the scientific feasibility of many

materials for a particular technology and this scientific feasibility means that new materials

can be created if the proper processes and raw materials are known and used, which partly

depends on the levels of scientific knowledge. If these materials are scientifically feasible and

we can create them, the rates of improvement will probably be very rapid. If either they are


not scientifically feasible, or we do not have the ability to create them, the rates of

improvement will be very slow or even non-existent.

A rapid rate of improvement during the early years of a technology suggests that there are

many materials that are scientifically feasible and that we are adept at creating them. This is

certainly the case with many electronic-related phenomena and technologies. Humans have

been able to create new forms of materials that have enabled dramatic improvements in the

performance and cost of microstructures such as diodes, transistors, other P-N junctions (e.g.,

solar cells), quantum wells or dots for lasers, and in general thin films for all of these

microstructures. This might be because the performance of these microstructures benefits from small changes in the composition of materials and processes.

A second way to think about the number of materials that might be scientifically feasible is

in terms of classes of materials. Some improvements are from creating new classes of

materials (and processes for them) while other improvements are from modifying materials

(and processes) within a specific class. The more classes that are created to exploit a physical

phenomenon, the more modifications that can be done within a class of material and thus the

greater the possibility of having and sustaining rapid rates of improvement.

Creating some classes of materials is considered so important that it brings someone a

Nobel Prize. Nobel Prizes were received for the creation of organic conductors, including

ones for solar cells, displays, and transistors, and more recently for bio-luminescence and

quasi-crystals. Crystals can be thought of as a physical phenomenon that displays certain characteristics, and these characteristics are appropriate for certain applications. Quasi-crystals exhibit some of the characteristics of crystals in addition to new characteristics that are still somewhat unknown. Thus, we can expect improvements in various measures of performance over the next 50 to 100 years as engineers and scientists create quasi-crystals that combine different types of materials, often using different types of processes. This long-term process of


improvement will be facilitated by advances in our understanding of quasi-crystals and other

materials because advances in science will facilitate the search for and creation of new

materials that enable improvements in performance.

This chapter begins with materials used for mechanical applications, followed by those used for energy storage, magnetic, electronic, agricultural, and finally pharmaceutical applications. Although this chapter focuses on materials for which the measures of performance are well defined, we recognize that all materials have multiple measures of performance, and progress is often more about finding materials with a specific combination of measures, or a new measure of performance, than about merely making improvements along a single well-known measure of performance.

2.1 Mechanical Engineering Applications

A key measure of performance in many mechanical engineering applications is the ratio of strength to weight, or strength to density, where strength is measured in terms of resistance to stretching and bending. High strength and low weight are of obvious importance to large structures such as buildings and bridges and to transportation equipment such as automobiles and aircraft. Without materials with higher strength-to-weight ratios, we would not have skyscrapers, suspension bridges, large aircraft, or the space station, and we will not have exotic new structures such as space elevators.

A report by the National Academy of Sciences concluded that scientists and engineers were able to increase the strength-to-density ratio of materials by more than 10 times in the 19th and 20th centuries. New forms of engineered materials such as composites have much higher strength-to-density ratios than do iron and steel, and the search for new materials still continues in, for example, carbon fiber. Engineers and scientists continue to create new types of additives, weaves, and the processes for making them in a search for carbon fibers that have higher strength-to-weight ratios or higher performance along other measures than do current


forms of carbon fiber.

These improvements in strength are one reason why engineers have been able to increase cutting speeds and thus reduce the machining time of turned metal parts (see Figure 2.1). Carbon steel cutting tools were replaced successively by tungsten carbide, cermet, ceramic, and diamond-based cutting tools; these new materials enabled increases in cutting speeds and thus in machine output, and increases in the scale of the equipment also played a role. Unfortunately, as discussed in Chapter 4, increasing the speed of loading and unloading is more difficult than increasing the cutting speeds.

Figure 2.1. Improvements in Machining Times of Turned Parts

Source: American Machinist 1977. Metalworking: Yesterday and Tomorrow, November.

The most recent improvements in the strength of materials are coming from creating new forms of carbon such as carbon nanotubes and graphene, which have strength-to-density ratios about 20 times better than those of carbon fibers. Although they are made from carbon, as are soot, graphite, and diamonds, they display different characteristics (including high conductivity) than do these other forms of carbon because of the way in which the carbon atoms bond to each other. As discussed in Chapter 7, the challenge of creating these new materials is inextricably linked to creating new processes, and the current challenge is to find ways to fabricate them at much lower cost, which involves some of the concepts that are discussed in the following two chapters.

A second key measure of performance for materials is temperature resistance. Because higher temperatures are needed for furnaces, smelters, and other manufacturing processes and often lead to better-performing engines and turbines, creating new materials that are resistant to high temperatures has been a goal for scientists and engineers for hundreds if not thousands of years. A report by the National Academy of Sciencesxiii concluded that scientists and engineers were able to increase the operating temperature of engines from less than 200 degrees centigrade in 1900 to more than 1,200 degrees by 1980.

Other material-related technologies such as polymers, man-made fibers, ceramics, and other

engineered materials also benefit from creating materials that better exploit physical

phenomena. It is difficult to present such data in figures or tables for these materials,

however, because it is difficult to summarize the improvements for a single measure of

performance. Many materials-related technologies have multiple measures of performance

and thus progress is often from finding materials that offer a new measure of performance in

addition to improvements in an existing measure of performance. For example, measures of

performance for man-made fibers include tensile strength, elastic recovery, modulus, and

moisture regain where different measures of performance and different combinations of them

are important for different applicationsxiv.

2.2 Energy Storage

Creating new materials that better exploit physical phenomena is also relevant to energy

storage devices such as batteries, flywheels, and capacitors. Since the first batteries were


constructed in the early 19th century, engineers and scientists have improved their energy (See

Figure 2.2) and power storage densities by creating and combining the right materials where

many of these new materials required new processes. Batteries with higher energy and power

densities store more energy per weight or volume and provide more power per weight or

volume respectively than do ones with lower energy and power densities. They also often

have lower costs per unit of energy since cost is often a function of volume or weight for

batteries. Improving these energy and power densities has led engineers and scientists to search for materials with high reactivity for the cathode and low reactivity for the anode, along with higher current-carrying capacity, low weight, and ease of processing.

Figure 2.2 Improvements in Energy Storage Density

Source: H Koh and C Magee, A Functional Approach for Studying Technological Progress: Extension to Energy

Technology, Technological Forecasting & Social Change 75 (2008) 735–758

Creating these materials has enabled engineers and scientists to improve the energy storage densities of batteries by about 10 times in the last 100 years. Improvements in the last few decades have come from using completely new materials such as lithium and from making small changes to the particular combination of lithium and other materials. This has led to a doubling of energy densities for Li-ion batteries in the last 15 years, and some expect a similar doubling to occur in the next 15 years from using modified forms of lithium such as lithium-air (see Figure 2.3)xv.

Figure 2.3 Recent Improvements in Energy Density of Batteries

Source: Tarascon, J. 2009. Batteries for Transportation Now and In the Future, presented at Energy 2050,

Stockholm, Sweden, October 19-20.

However, not only are lithium-ion batteries more expensive than are lead-acid batteries on an energy storage density basis, their energy densities are about 1/30 the level found in gasoline, and even if the rates of improvement in Figure 2.3 or those suggested by recent developmentsxvi continue (both about 5% per year), it will take about 75 years before the energy storage density of Li-ion batteries equals that of gasoline. Poor energy storage densities lead to a vicious cycle of heavier cars requiring more batteries and more batteries leading to heavier cars. This should make everyone very pessimistic about battery-based electric vehicles, even if we dramatically increase research funding for them. Billions of dollars have been spent on battery storage technologies over the last 100 years because batteries have been used in automobiles and electronic products throughout this period. Unless scientists and engineers find completely new classes of materials or utilize new ones with higher densities (but with other problems), such as sodium-ion chemistries that reportedly have densities as high as 600 Wh/kgxvii, it is unlikely that energy storage densities equivalent to gasoline will ever be achieved.
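The arithmetic behind the 75-year figure is simple compounding: a technology that is a given multiple behind a static target, and improving at a fixed annual rate, closes the gap in ln(gap)/ln(1 + rate) years. A minimal sketch in Python, using the chapter's round numbers (batteries at roughly 1/30th the energy density of gasoline, improving at about 5% per year; the exact answer is sensitive to how the gap is counted):

    import math

    def years_to_parity(gap, annual_rate):
        """Years for a technology improving at annual_rate (fraction/year)
        to close a gap-fold deficit against a static target."""
        return math.log(gap) / math.log(1 + annual_rate)

    # The chapter's round numbers: ~1/30th of gasoline, ~5% per year.
    # The gap is larger if usable rather than nominal energy is counted.
    for gap in (30, 40):
        print(f"{gap}x gap at 5%/yr: ~{years_to_parity(gap, 0.05):.0f} years")

A 30-fold gap at 5% per year gives roughly 70 years, and a somewhat larger gap gives the more-than-75-years figure cited above; either way the conclusion is the same.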

Some expect that the faster rates of improvement in energy storage densities for flywheels and capacitors will cause their energy and power storage densities to exceed those of lithium-ion batteries sometime in the near future. As with batteries, engineers and scientists have improved the energy and power storage densities of flywheels and capacitors by creating new materials with the appropriate properties; they improved the energy storage density of flywheels by 15 times in the last 30 years and of capacitors by 1,000 times in the last 40 years. If the trends shown in Figure 2.2 continue, flywheels and capacitors will have a higher energy storage density than batteries within 10 and 30 years respectively. One way that higher energy densities for flywheels have been achieved is by using carbon fiber and other engineered materials with high strength-to-density ratios. As discussed in Chapter 7, one reason these trends might continue is that carbon nanotubes and graphene offer substantially higher energy storage densities for flywheels and capacitors respectively than are available with existing materials.
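When the chasing technology is also improving, the crossover time depends on both the current gap and the difference in growth rates: t = ln(gap)/ln((1 + g_chaser)/(1 + g_leader)). The sketch below uses purely illustrative numbers chosen to be roughly consistent with Table I.1 and Figure 2.2 (about 4% per year for batteries, 10% for flywheels with roughly a 2x deficit, and 17% for capacitors with roughly a 30x deficit); read the actual values off Figure 2.2 before relying on them:

    import math

    def crossover_years(gap, g_chaser, g_leader):
        """Years until a chaser growing at g_chaser overtakes a leader that is
        currently `gap` times better and growing at g_leader (fractions/year)."""
        return math.log(gap) / math.log((1 + g_chaser) / (1 + g_leader))

    # Illustrative values only; see Figure 2.2 for the underlying trends.
    print(f"flywheels vs. batteries: ~{crossover_years(2, 0.10, 0.04):.0f} years")
    print(f"capacitors vs. batteries: ~{crossover_years(30, 0.17, 0.04):.0f} years")

Under these assumptions the crossover times come out on the order of 10 and 30 years, matching the trends described above.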

As an aside, people sometimes make fun of the efforts to create nuclear cars or airplanes.

One should recognize that these efforts were motivated by the extremely high energy and

power storage densities of these technologies, levels that are 10,000 times higher than those

found in gasolinexviii. Since costs are often related to size, these high energy and power

storage densities might have led to much lower costs for nuclear than gasoline propulsion in

automobiles. Of course they did not, but the motivation was correct. Similarly, one should remember that the concept of an internal combustion engine is based on a controlled explosion; it is just that we can control these explosions while we cannot control nuclear reactions to the levels demanded by the public.


2.4 Magnetic Materials

Creating new materials that better exploit physical phenomena is also relevant to magnetic materials, in which rapid improvements have been made along at least two measures of performance. Coercivity represents the magnetic field required to reduce the magnetization to zero (measured in oersteds or amperes per meter), while the "energy product" (in mega-gauss-oersteds) represents the density of magnetic energy. Coercivity was improved by about 50 times between the 1930s and 1980s (see Figure 2.4) and energy product was improved by about 50 times between 1920 and 2010 (see Figure 2.5)xix; coercivity has been improved by a further 100 times since 1980 (see Chapter 3).

Figure 2.4 Improvements in Coercivity (amps/meter)

Source: NAS/NRC, 1989. Materials Science and Engineering for the 1990s. National Academy Press


Figure 2.5 Improvements in Energy Product for TDK

These improvements were achieved by creating new forms of magnetic materials such as steel alloys, barium hexa-ferrites (or ferrites for short), and most recently rare earth ones (see Figure 2.4). The steel alloys are sometimes called alnicos for their combinations of aluminum, nickel, and cobalt. Rare earth elements are those that lie at the bottom of the periodic table, and rare earth magnets contain various combinations of these elements along with manganese, iron, cobalt, and nickel. As an aside, rare earth metals are not as rare as their name implies. They are as abundant as lead and mercury in the earth's crust and much more abundant than gold and silver. It is China's monopoly on their production, primarily due to environmental restrictions in other countries, that makes them seem rarexx.

These improvements are relevant for electric motors, electrical generators, and for magnetic

storage. The output of a motor or generator directly depends on the energy product, i.e.,

magnetic energy density, of the magnetic windings while fast switching in magnetic disks or

storage requires both high coercivity and energy productxxi. As discussed in Chapter 3,


improvements in coercivity have been necessary to achieve improvements in the areal

recording density of platters and tape.

2.5 Electronic Applications

Creating new materials that better exploit physical phenomena (along with discovering new phenomena) has been essential to the dramatic improvements in electronics that we have experienced during the second half of the 20th century. The most important class of these materials has been semiconductors, which fall between conductors and insulators in that they conduct only under certain conditions. Semiconductor materials include silicon, germanium, and combinations of so-called III-V materials such as aluminum phosphide, gallium arsenide, and indium antimonide. A key measure of performance for them is mobility; this determines the speed with which electrons and holes (absences of electrons) can pass through them and thus the speed with which transistors switch. Improvements in mobility along with reductions in scale (discussed in Chapter 3) have been steadily achieved; silicon is the most widely used material for transistors, followed by gallium arsenide.

Other materials are also important for transistors and for making the connections between them in an integrated circuit (IC). Although for many years ICs consisted of only seven elements, efforts to continue reductions in scale since the 1990s have required engineers and scientists to create new materials for interconnects and insulators; one example is copper interconnect, which has largely replaced aluminum. These efforts still continue as further reductions in feature size bring new challenges and require more radical solutions.

New types of semiconductor materials have also been created to exploit other physical phenomena such as electroluminescence, optical amplification based on the stimulated emission of photons (i.e., lasing), and the photovoltaic effect, and thus to improve the relevant measures of performance by several orders of magnitude. As discussed in more detail in Part II, engineers and scientists have improved the luminosity per watt of LEDs by finding new combinations of semiconducting materials that better exploit the phenomenon of electroluminescence; these include new combinations of gallium, arsenic, phosphorus, indium, and selenium. Many of the improvements in semiconductor LEDs also led to improvements in semiconductor lasers, due to the similarities between them. These improvements are usually measured in power output per volume and also depend on defect-free lenses and on better materials for removing heat from the lasing area. As also discussed in Part II, engineers and scientists have improved the light sensitivity of photosensors and the efficiency of solar cells by finding semiconducting materials, and processes for them, that capture more of the incoming light. Each of these new technologies is gradually becoming economically feasible because engineers and scientists are improving the relevant measures of performance at a rapid and steady rate by creating the relevant new materials and the processes for making them.

Three final examples of creating materials to better exploit physical phenomena in electronics can be found in organic transistors (discussed in Chapter 5), superconductivity (addressed in Chapter 13), and optical fiber. Optical losses in glass fiber have been reduced (Figure 2.6) by improving the purity and structure of the glass, doping it with various impurities, and creating the processes for doing this. For example, researchers at American glass maker Corning demonstrated a fiber with 17 dB/km attenuation by doping silica glass with titanium. A few years later they produced a fiber with only 4 dB/km attenuation using germanium dioxide as the core dopant. In 1981, General Electric produced fused quartz ingots that could be drawn into fiber optic strands 25 miles (40 km) longxxii. As an aside, Figure 2.6 is a rare example of a technology whose improvements have slowed considerably, with few improvements over the last 20 years. But as discussed in Part II, improvements in bandwidth and speed have continued and no limits are in sight.

Figure 2.6 Reductions in Optical Loss (decibels per km) of Optical Fibers

Source: NAS/NRC, 1989. Materials Science and Engineering for the 1990s. National Academy Press
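Because attenuation is quoted in decibels per kilometer, seemingly modest reductions translate into enormous gains in usable distance: the fraction of optical power surviving L kilometers of fiber at a dB/km of attenuation is 10^(-aL/10). A short sketch using the milestones quoted above, plus roughly 0.2 dB/km, a typical figure for modern silica fiber near 1550 nm (the 40 km span is simply an illustrative distance):

    def surviving_fraction(alpha_db_per_km, length_km):
        """Fraction of optical power remaining after length_km of fiber
        with attenuation alpha_db_per_km (standard decibel definition)."""
        return 10 ** (-alpha_db_per_km * length_km / 10)

    # Attenuation milestones from the text, plus a modern figure, over 40 km.
    for alpha in (17, 4, 0.2):
        print(f"{alpha} dB/km over 40 km: {surviving_fraction(alpha, 40):.3g} of input power")

At 17 dB/km essentially nothing survives a 40 km span; at about 0.2 dB/km roughly one-sixth of the power does, which is why low-loss fiber made long unrepeated links possible.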

2.6 Agricultural Applications

Examples of creating materials to better exploit physical phenomena also exist in

agriculture. Improvements in the yield (e.g., in bushels) per acre have been occurring for

many years and for many crops. For example, between 1945 and 2005, yields in the U.S.

were increased by more than three times for corn and wheat, two times for soybeans, and 1.5

times for ricexxiii. These improvements were driven by the creation of many new materials, albeit biological ones; these include new types of seeds, fertilizers, pesticides, and herbicides. Better seeds come from breeding, just as better animals do, and new classes of seeds have contributed to the improvements in these yields. Consider corn. While "open pollinated" seeds were used in the 19th century, double-cross, single-cross, and now biotech (i.e., genetically modified organism)-based seeds were developed in the 20th


centuryxxiv. Better fertilizers come from finding better forms and sources of the three major

nutrients: nitrogen, phosphorus and potassium. Nitrogen aids vigorous vegetative growth,

phosphorus is needed for root growth and vigor, and potassium helps increase plant

metabolism and disease resistance. Better pesticides and herbicides come from finding

specific chemicals that selectively kill some insects and weeds and not others.

One of the first and still most important sources and forms of nitrogen-based fertilizer is ammonia, which consists of one nitrogen atom and three hydrogen atoms. In spite of the large amounts of nitrogen in the air, it was not until Fritz Haber and Carl Bosch developed the process for transforming atmospheric nitrogen into ammonia that inexpensive fertilizers became available to farmers. Along with increasing the scale of these processes, which is discussed in Chapter 4, subsequent improvements have come in the form of new sources and forms of nitrogen. Natural gas has become the dominant feedstock for ammonia, and thus the price of ammonia often rises and falls with the price of natural gas, and also oil.

The high price of ammonia-based fertilizers is one reason that plant biologists are trying to develop seeds that provide high crop yields without fertilizers; reducing or eliminating the need for pesticides, herbicides, or even water are also goals. This is because these inputs are expensive and often bad for the environment, water is becoming scarcer, and insects often become resistant to pesticides. Thus, new measures of performance are emerging for seeds, and to understand the degree of success with these seeds, data must be gathered on the rates of improvement along these new measures of performance.

These new demands increase challenges for plant biologists and for providing enough food

for the planet. The latter issue is a highly contentious one where perspectives strongly differ.

On the one hand, several crops in the U.S. have recently experienced few improvements or

even declines in crop yieldxxv. These include sorghum, rye, sugarcane and oats. On the other

hand, since other countries, particularly developing ones, have much lower crop yields than


does the U.S., merely bringing the levels of crop yield in these countries up to those found in

the U.S. and other developed countries can increase global production by several times.

Moreover, further increases in crop yield in the U.S. and developed countries could provide

further opportunities for the rest of the world and one’s view towards these potential increases

largely determines one’s level of optimism about global food production meeting population

increases.

One thing we can say is that improvements in crop yield are not occurring at a rapid pace, or at what this book calls exponential rates. The rates of improvement for crop yields are much

slower than what is found in other technologies that are addressed in subsequent chapters.

2.7 Pharmaceutical Applicationsxxvi

The pharmaceutical industry is all about creating materials that exploit physical

phenomena. In this case, it is about creating chemical compounds that fight or provide

immunity from diseases. Scientists first struggle to find appropriate chemical compounds,

often by trying thousands of different naturally occurring compounds until they find one that

has a positive impact on a disease. Once they find an appropriate compound, firms then struggle to isolate the compound and incrementally increase its concentration, because purified drugs are often more reliable and predictable. This was done with morphine from opium poppies, cocaine from coca leaves, nicotine from tobacco, quinine from cinchona, salicylic acid from willow bark, and penicillin from ascomycetous fungi.

Scientists may also modify the naturally occurring substance in order to produce new

substances that are more powerful or have fewer side effects than do the naturally occurring

compounds. For example, scientists created acetylsalicylic acid (aspirin) from salicylic acid and diacetylmorphine (heroin) from morphine. Similarly, they created xylocaine, amylocaine, and procaine from cocaine, all of which are widely used as anesthetics.

Now scientists are trying to create drugs that fight or provide immunity from diseases


through a rational understanding of the human body. While chemical compounds derived from natural sources, together with their synthetic variants, account for about 70% of the drugs in modern medicine, the discovery of vitamins and the identification of hormones like insulin have led to optimism about creating drugs through a rational understanding of the human body via the fields of physiology and molecular biology. Vitamins were identified through an understanding of the human body and have been synthesized since the middle of the 20th century.

Recombinant DNA technology was used by Genentech to modify Escherichia coli bacteria to produce human insulin in 1978, and this event is often defined as the beginning of the biotechnology revolution. Prior to the development of this technique, insulin was extracted from the pancreas glands of cattle, pigs, and other farm animals. First, Genentech researchers produced artificial genes for each of the two protein chains that comprise the insulin molecule. Second, they inserted these genes into plasmids (small, circular pieces of bacterial DNA) next to genes that are activated by lactose. Third, they inserted the recombinant plasmids into Escherichia coli bacteria, which were then induced to produce human insulin. Many hope that the successful mapping of the human genome and the falling cost of DNA sequencers will enable more examples of synthetically producing drugs through a rational understanding of the human body.

2.8 Discussion

Creating materials to better exploit physical phenomena is an important mechanism for

improving the performance and cost of technologies and there are several common themes

about this source of exponential improvements. First, many new materials are created in

laboratories and not in factories and thus production is not needed to make many of the

improvements discussed in this chapter. This was the case with load-bearing, temperature-resistant, and magnetic materials, batteries and other storage devices, transistors, LEDs,

OLEDs, organic transistors, solar cells, superconductors, seeds, fertilizers, herbicides, and

pesticides. University scientists and engineers create these materials because creating them

helps their careers in terms of publications, grant money, promotions, and patents. Even

corporate scientists and engineers create these materials for some of the same reasons since

they are often evaluated in a similar way.

Second, the fact that these materials are created in laboratories means that this creation has

occurred even though some of these technologies have never been produced on a large scale.

This is certainly the case with new technologies such as organic transistors, OLEDs, and

superconductors in which the modern system of R&D and laboratories is creating these new

materials for the reasons given in the previous paragraph. This suggests that new materials

could have been created for many of the other technologies covered in this chapter even

without production. This conclusion has obvious implications for policy in that subsidies for

R&D are probably a more effective stimulus for creating these new materials than are subsidies for production, which is the current emphasis in, for example, clean energy.

Third, these two conclusions are also consistent with the fact that most of the performance

trajectories display relatively straight lines. While some argue that the improvements in

performance accelerate as the technologies are commercialized and as demand for them

increases, this chapter's analysis suggests that this does not occur. Instead, the relatively straight lines of these performance trajectories suggest that demand has a different impact on performance, one that is not fully understood. It could be that increases in demand prevent the rate of improvement from declining when diminishing returns from research would otherwise be expected to set in. It is certainly not the case that a slowdown in the rate of improvement in an old technology causes research funds to move to a new technology. That multiple technologies are being simultaneously pursued is a better fit with the data for organic materials, energy storage densities, magnetic materials, and corn yield.

Fourth, the relatively straight lines can help us understand the future rate of improvement and when a technology might become economically feasible for specific applications. One can compare the new technology with existing ones along the main and other measures of performance in order to understand the rate at which it might become economically feasible for those applications. In doing this, it is important to identify all the measures of performance, as many technologies are evaluated along multiple measures, and new materials often succeed because they have advantages along a new measure of performance. Finding the right combination of measures is often the challenge, and this challenge is exacerbated by the fact that historical trends have not been plotted for most measures of performance, or even for most technologies.

Fifth, some of these technologies experience more rapid rates of improvement than do others. Most of the rapidly improving technologies are electronics-related ones such as LEDs, OLEDs, lasers, organic transistors, and glass fiber, in which improvements are apparently easy to make. This is perhaps because new materials and processes are easily created, and perhaps because small changes in materials and processes have a strong impact on performance. The latter may be because small changes in materials have a large impact on the crystal lattice structure of the materials and thus facilitate the construction of microstructures such as diodes, transistors, other P-N junctions (e.g., solar cells), quantum wells or dots for lasers, and in general thin films for these microstructures, whose performance benefits from small changes in the composition of materials and processes.


Chapter 3

Geometric Scaling: Reductions in Scale

Some technologies benefit from reductions in scale, and these technologies have experienced some of the most rapid improvements in performance and cost in human history. The concept of geometric scaling helps us understand when technologies benefit

from reductions in physical scale, primarily by focusing on the relationship between the

geometry of a technology, its scale, and the physical laws that govern it. Integrated

circuits (ICs) and magnetic storage benefit from reductions in scale because the rules that

govern their operation define performance in terms of smaller scale. Placing more transistors,

memory cells, or magnetic storage regions in a certain area increases the speed and

functionality and reduces both the power consumption and size of the final product, which

are typically considered improvements in performance for most electronic products (they also

lead to lower material, equipment, and transportation costs). The combination of both

increased performance and reduced costs as size is reduced has led to very rapid

improvements in the performance-to-cost ratio of many electronic components and of the electronic systems composed of these components. For example, a three-order-of-magnitude reduction in transistor length has led to about nine orders of magnitude improvement in the cost of transistors on a per-transistor basis.
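A minimal sketch of the geometry behind these numbers (our interpretation, not a figure from the sources): transistor density grows with the square of the linear shrink, so three orders of magnitude in transistor length alone account for six orders of magnitude in transistors per unit area, with the remaining improvement in cost per transistor coming from larger dies and wafers, higher yields, and better equipment.

    def density_gain(linear_shrink):
        """Transistors per unit area grow as the square of the linear shrink,
        since each transistor occupies roughly (length squared) of chip area."""
        return linear_shrink ** 2

    shrink = 1_000  # ~3 orders of magnitude reduction in transistor length
    print(f"linear shrink {shrink:,}x -> ~{density_gain(shrink):,}x transistors per area")
    # ~10^6 from geometry alone; the ~10^9 improvement in cost per transistor
    # also reflects larger dies and wafers, higher yields, and faster
    # equipment (our reading, not a figure from the book's sources).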

Richard Feynman is sometimes credited with predicting these advances in his famous 1959

speech “There's Plenty of Room at the Bottom: An Invitation to Enter a New Field of

Physics.” While he was primarily referring to the field of physics and where physicists

should place their emphasis, technology has also moved in the same direction partly because

physicists and other scientists have advanced our understanding of small-scale phenomenon.

This improved understanding along with the application of this understanding to ICs and

52

Page 53: Exponential change: what drives it, what does it tell us about the future?

magnetic storage is now enabling us to benefit from reductions in other scale such as MEMS

(micro-electronic mechanical systems), bio-electronic ICs, and more generally

nanotechnology, which are addressed in Chapters 6, 7, and 10.

3.1 Magnetic Hard Disks

Hard disks are one type of magnetic storage. Engineers and scientists have been reducing

the scale of features on magnetic hard disks (and tape) for the last 60 years, which has

enabled rapid increases in the areal recording density of these disks (and tape), as shown in Figure 3.1, and rapid decreases in their price (see Figure 3.2). Hard disk assemblies

consist of a platter, a read-write head that is connected to an actuator, and input-output

connectors. Writing involves magnetizing a specific region and reading involves sensing a

region’s magnetic field. Key features include the size of the magnetic “domains” on a platter

that store a single bit, of the read and write elements on the read-write head, and of the

spacing between the platter and read-write heads.

Figure 3.1 Improvements in Areal Recording Density for Magnetic Hard Disk Drives


Source: Yoon Y, 2010. Nano-Tribology of Discrete Track Recording Media, Unpublished PhD

Dissertation, University of California, San Diego

Figure 3.2 Falling Price ($/GByte) of Hard Disk Drives

Source: Yoon Y, 2010. Nano-Tribology of Discrete Track Recording Media, Unpublished PhD

Dissertation, University of California, San Diego


Reducing the scale of these features has required better process control over specific parameters, the use of new scientific principles, and the creation of materials that better exploit these scientific principles. First, improvements in sputtering equipment enabled better consistency along with reductions in the size of magnetic domains on a platter, and improvements in semiconductor processing technology enabled smaller domains to be sensed by a magnet in a read-write head in which the magnet is "shielded" from all but a very small area at any given time. Second, creating new materials that better exploit these principles has also been necessary for the reductions in feature sizes to occur. For example, the creation of materials with high coercivity and energy product, which were covered in Chapter 2, contributed to reductions in feature size and thus to increases in magnetic recording density (see Figure 3.3). Third, changing from electromagnetic induction to magnetoresistance and most recently to spintronics has also been necessary to continue reducing feature sizes. Spintronics exploits both the intrinsic spin of the electron and its associated magnetic moment; its best-known effect is giant magnetoresistance, whose discoverers, Albert Fert and Peter Grünberg, received the 2007 Nobel Prize in physics.

Figure 3.3 Increases in Coercivity (1,000s of Amps/m) were Necessary to Achieve Increases in Recording Density (Mbits per in²)


Source: www1.hgst.com/hdd/technolo/overview/chart11.html

These improvements in magnetic recording density have also contributed to increases in the capacity of hard disk drives and thus to the emergence of smaller-diameter disk drives such as 5.25-, 3.5-, 2.5-, 1.8-, 1.0-, and 0.85-inch ones, which have obvious advantages for portable computers and other products. Others have characterized the initial diffusion of smaller disk drives in terms of their inferior capacity and low-end customers, with demand from low-end customers driving improvements in capacity. However, the ability to rapidly increase the magnetic recording density on a platter meant that the emergence of smaller disk drives was inevitable given the benefits of small size in final products.

For example, Clayton Christensen, author of The Innovator's Dilemma, and his co-authors in The Innovator's DNA argue that the key to finding so-called disruptive technologies is to find low-end innovations through a process of associating, observing, experimenting, questioning, and networking; their model of "disruptive change" assumes that rapid improvements will naturally emerge once demand for the low-end innovations increases. By focusing on finding low-end innovations, Christensen and his colleagues ignore the fact that


some technologies experience more rapid improvements than do others and these

technologies are more likely to lead to both the emergence of low-end innovations and the

replacement of a dominant technology by the low-end innovation.

Similar arguments can be made for magnetic tape, where smaller tape players replaced the larger dominant ones and where these smaller tape players are characterized as disruptive innovations by Christensen. Again, while his theory focuses on firms finding low-end, inferior innovations for low-end customers, in which demand for them drives improvements, the ability to rapidly increase the magnetic recording density of tape meant that the emergence and diffusion of smaller tape players such as the Sony Walkman were inevitable. The main

difference between the stories of hard disks and magnetic tape is that incumbents, Sony and

Philips, managed the transition to smaller tape players while many incumbents (an exception

was IBM) did not do so with disk drivesxxvii. Nevertheless, this book’s key message holds for

both of them: when exponential improvements are occurring, change will occur and this

change can create opportunities for both incumbents and new entrants.

3.2 Integrated Circuits

Integrated circuits (ICs) combine multiple transistors and other electrical devices such as capacitors and resistors on a single chip. Reductions in the size of transistors and other features have enabled increases in the density of transistors on a chip. Combined with increases in the size of a chip (often called die size), which required reductions in defect density, the increased density of transistors led to increases in the number of transistors per chip, a trend now called Moore's Law (see Figure 3.4). Increasing the number of transistors on a chip leads to greater functionality; this could be greater processing capability, more memory capacity, or more pixels for still or video camera chips. Furthermore, reducing the feature sizes also reduces the time and power needed for individual transistors and memory cells to switch. Faster switching leads to faster processing for microprocessors and other ICs and to faster access times for memory chips. The combination of increased performance and lower costs from reductions in scale, and the ability to achieve large reductions in scale, have led to increases of more than a billion times in the number of transistors per chip over 40 yearsxxviii.

Figure 3.4. Moore’s Law

Source: Wikipedia


Reductions in feature size come from improvements in processes and in the equipment used in those processes. ICs are manufactured by depositing multiple layers of materials on a silicon wafer, forming patterns in these layers with various types of equipment such as photolithographic equipment, and finally slicing the wafer into multiple IC chips. The patterns on these chips define the locations of transistors and their dimensions. A key dimension is the gate length of a transistor, and reductions in gate length depend on improvements in photolithographic and etching equipment.

Photolithographic equipment shines light through a mask and onto a wafer that has been

coated with a photosensitive material called "resist." Depending on the type of resist, the area of material that is exposed to the light is either removed or retained during a subsequent etching process. Either way, the light changes the etching rate of the material and thus causes

different rates of etching between the areas that were exposed or not exposed to the light. By

repeating this process for each material that is deposited on the IC wafer, patterns are formed

in each layer of material.

Improvements in the minimum feature size are shown in Figure 3.5. For many years these improvements came from better control over the mask, light source, and photosensitive material. As the minimum feature size approached the wavelength of visible light, however, new solutions were needed. First, ultraviolet light sources replaced visible ones, and within ultraviolet light there has been a move from mercury lamps to krypton fluoride (KrF) lasers and most recently to argon fluoride (ArF) lasers. Second, the growing "gap" between the wavelength of the light source and the smallest feature sizes (see Figure 3.6) means that sophisticated techniques are needed to correct for the problems encountered when the feature size is smaller than the wavelength of light. These include sophisticated lenses to focus the light into a smaller area than the wavelength and both sophisticated software and


super-computer processing capability to compensate for the distortions that come from these

lenses.

Figure 3.5 Reductions in Minimum Feature Size (nm) of ICs

Sources: ICKnowledge and authors’ analysis

Figure 3.6 Gap between Wavelength of Light and Minimum Feature Size


Source: http://www.soccentral.com/results.asp?CatID=488&EntryID=30894

Similar arguments can be made for other types of ICs such as logic ICs, memory ICs, application-specific ICs (ASICs), and camera chips. Within memory, the same story can be told for RAM (random access memory), dynamic RAM (DRAM), static RAM (SRAM), ROM (read-only memory), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), and flash memory, where smaller feature sizes enabled greater storage density. For camera chips, reducing the minimum feature size of CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor)-based camera ICs enables smaller pixels and thus a larger number of them in a given area (see Figure 3.7). These


improvements are what enable the higher-pixel-count cameras that we have in our camera phones. Less well known is that these reductions in the scale of the pixels require increases in the sensitivity of the pixel. The reason is that smaller pixels receive less light, and thus they must absorb a greater fraction of the available light in order to effectively register the

light. As shown in Figure 3.7, steady improvements in both pixel size and sensitivity have

been achieved such that both are 50 times better than they were 25 years ago. These

improvements are primarily from new materials and new structures in the image sensors.

Figure 3.7. Improvements in Pixel Size and Sensitivity of Camera ICs

Source: T. Suzuki, “Challenges of Image-Sensor Development”, ISSCC, 2010
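The link between pixel size and required sensitivity is geometric: the light a pixel collects scales with its area, so each halving of pixel pitch cuts the photon budget by four. A small sketch with illustrative pixel pitches (the specific values are our own, chosen only to span roughly the 25-year shrink described above):

    def relative_light(pitch_um, reference_pitch_um):
        """Light gathered per pixel relative to a reference pixel, assuming
        collection scales with pixel area (pitch squared)."""
        return (pitch_um / reference_pitch_um) ** 2

    # Illustrative pitches only: shrinking from ~8 um to ~1.1 um pixels.
    for pitch in (8.0, 4.0, 2.0, 1.1):
        light = relative_light(pitch, 8.0)
        print(f"{pitch} um pixel: {light:.3f}x the light -> ~{1 / light:.0f}x sensitivity needed")

Under these assumptions, a roughly 7x shrink in pitch demands about a 50x gain in sensitivity, consistent with the 50-times figure in the text.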

Improvements in power electronic ICs have also occurred, albeit at a much slower rate than in camera and other ICs. These improvements have played an important role in a number of applications, including the electrification of the automobile and now wireless power. Wireless power can enable devices to access power without a physical connection


between the device and a power source.

3.3 Moore’s Law and New Opportunities

These improvements in minimum feature size and the number of transistors per chip have enabled, and continue to enable, new forms of both ICs and electronic products to emerge and improvements in them to occur. More specifically, increases in the number of transistors per chip have caused new ways of organizing transistors to emerge over the last 50 years. When ICs were first introduced in the early 1960s, custom and logic chips were the only two alternatives. Using custom chips in an electronic system meant high performance but also high development cost, while using standard logic chips meant both the development cost and the performance were much lower. Logic chips perform simple logic functions, and the mathematics of Boolean logic is used to combine these simple logic gates into more complex electronic circuits such as those that add and multiply (see the sketch below). As the number of transistors per chip increased, it became possible for individual logic chips to perform more complex functions such as adding, multiplying, and other mathematical functions.
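To make this concrete, here is a minimal sketch (in Python, standing in for hardware gates; not how any particular chip is implemented) of how Boolean logic composes simple gates into an adder: a half adder built from an XOR and an AND, chained into a full adder with an OR.

    def half_adder(a, b):
        """Sum and carry of two bits, using only an XOR gate and an AND gate."""
        return a ^ b, a & b

    def full_adder(a, b, carry_in):
        """One column of binary addition: two half adders plus an OR gate."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2

    print(full_adder(1, 1, 0))  # 1 + 1 -> (0, 1), i.e., binary 10

Chaining full adders end to end yields multi-bit adders, and similar compositions yield multipliers; each additional bit of width consumes more gates, which is why more transistors per chip meant more complex functions per chip.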

Further increases in the number of transistors per chip made it possible by the early 1970s

to design a computer on a chip, or what we now call a microprocessor, and to introduce

memory ICs. Because these microprocessors are programmable, electronic systems built from

them have lower development costs than do those built using either custom or logic ICs. But

they required a certain number of transistors on a chip before all of the relevant functions

could be placed on a single IC chip. Similar arguments can be made for memory ICs. They

required a certain number of transistors on a chip before a reasonable number of memory bits

could be placed on a single IC chip. The first memory ICs were random access memory

(RAM) and they had 1024 bits. Partly to store programs, new forms of memory such as ROM,

PROM, EPROM, and EEPROM emerged.

Improvements in microprocessors and memory enabled new forms of electronic systems to


emerge with personal computers (PCs) being the most famous example. Although the first

applications for microprocessors were specialized electronic equipment for which the custom

design was uneconomical, improvements in microprocessors and memory gradually made

PCs economically feasible from the late 1970s. Similar things happened for mini-computers

in the 1960s when improvements in logic ICs made mini-computers economically feasible

and in general led to improvements in overall computing speed.

As one computer designer argued, by the late 1940s computer designers had recognized that

“architectural tricks could not lower the cost of a basic computer; low cost computing had to

wait for low cost logic”xxix, which mostly came in the form of better ICs. For example, an

order of magnitude improvement in the number of transistors per chip about every seven

years led to similar levels of improvements in computations per second and per kilowatt hour

of computers (See Figure 3.8)xxx. In addition to the emergence of new forms of computers,

improvements in ICs also led to the emergence and improvements of other systems such as

computer aided tomography, magnetic resonance imaging, video game consoles, servers,

routers, mobile phones, control systems (for machine tools, aircraft, ships, and automobiles),

MP3 players, and tablet computers. For medical equipment, better algorithms, along with the ICs and computers that run them, enabled better image reconstruction. Improvements in magnetic resonance imaging also depend on more powerful magnets, which in turn depend on the improvements in magnetic strength that come from creating new materials such as those based on rare earths.
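To put that seven-year figure in more familiar terms, here is a minimal sketch of the arithmetic (our illustration; the underlying trend is from the sources cited above):

```python
import math

# An order of magnitude (10x) improvement about every seven years implies:
annual_growth = 10 ** (1 / 7)                           # ~1.39x per year
doubling_years = math.log(2) / math.log(annual_growth)  # ~2.1 years per doubling
print(f"~{annual_growth:.2f}x per year, doubling roughly every {doubling_years:.1f} years")
```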

Figure 3.8 Improvements in Computations per Power Consumed (Kilowatt Hour)


Source: (Koomey et al, 2011)

Returning to the impact of Moore’s Law on the organization of transistors on ICs, in the

early 1980s, so-called application specific ICs (ASICs), which are semi-custom ICs, began to

diffuse as the number of transistors per chip made them economically advantageous over

logic chips and to some extent microprocessors. Increases in the number of transistors per

chip made them economically advantageous in two ways. First, the increasing numbers of

transistors on a chip made it difficult to design logic chips that both used the full extent of

integration possible from Moore’s Law and that still could be considered “general-purpose

ICs”. Second, the increasing number of transistors on a chip has caused the development costs of logic and other ICs to rise, and to rise faster than improvements in computer-aided design tools can offsetxxxi. Like microprocessors and memory, the emergence of these ASICs enabled even

lower-volume electronic systems to use complex ICs without driving up their cost.

In the early 1990s, further increases in the number of transistors per chip made

application specific standard products (ASSP) economically advantageous. These are

standard IC chips that are designed for a specific system/product and often for a specific

standard module in that system/product. Their designs are somewhat similar to those of either

microprocessors or ASICs since many ASSPs are microprocessors that are configured for a

specific application and often done so using ASIC-design techniques. The difference between

ASSPs and ASICs is that ASSPs are standard ICs that are used by different producers of a

final electronic product.

The primary driver of growth in the market for ASSPs is the increasing volumes of

electronic products. Until the market for PCs began to grow in the 1980s, electronic products that were both complex and produced in high volumes did not exist. As the demand for

personal computers grew in response to their increasing performance and falling cost, Intel

and other semiconductor suppliers began customizing microprocessors and other ICs for PCs.

Similar things happened for video game consoles, servers, routers, mobile phones, MP3

players, tablet computers, and other electronic products. Semiconductor suppliers customized

microprocessors and other ICs for these electronic products as their volumes grew.

The increasing volumes of these electronic products are a direct result of the falling cost

of ICs. Although at any given point in time increases in volumes will likely lead to lower

costs, electronic products that are cheap enough to be demanded in high volumes did not

emerge until the price of ICs had fallen and their performance had risen to a sufficient level. In other

words, it is not the volumes of electronic products that are driving cost reductions in

semiconductors and ICs, it is the improvements in semiconductor manufacturing equipment

and the benefits from reductions in scale that are driving cost reductions in semiconductors


and thus enabling larger numbers of people to purchase electronic products such as PCs,

mobile phones, and digital set-top boxes.

The emergence of logic, memory, microprocessor, and application specific ICs, and also of ASSPs, has created opportunities for new entrants and to a lesser extent incumbents.

Throughout the 1970s, 1980s and 1990s, new firms emerged and to some extent they

continued to emerge in the 2000s to offer these new types of ICs. No longer do the large

vertically integrated U.S., Japanese and European electronic firms dominate the

semiconductor industry; large numbers of U.S. design houses are the leaders in many areas

and they continue to find new profitable niches for their products. These trends are expected

to continue as the rising cost of fabrication facilities favors design houses and as the increasing number of transistors on a chip requires new organizations of transistors, which new entrants often exploit better than do incumbents.

The increasing number of transistors on a chip and the emergence of new forms of ICs

have also created new opportunities for new entrants and to some extent incumbents in

electronic systems. Rather than incumbent suppliers of consumer electronics and computers,

new entrants succeeded in PCs, video game consoles, servers, routers, mobile phones, MP3

players, tablet computers, and other electronic products. Whatever the explanation for

incumbent failure, new entrants have exploited these opportunities better than have

incumbents and thus understanding the types of new systems and the new organizations of

transistors that are being created by Moore’s Law and other component improvements is of

great importance for electronic firms and students hoping to be successful entrepreneurs.

We also believe that understanding the impact of improvements in components on systems

can help us identify potentially disruptive technologies better than can Clayton Christensen's theory of disruptive technologies. Not only did smaller hard disk drives emerge

and experience improvements because the platters benefited from reductions in scale, smaller


and better electronic systems also emerged because the ICs and to a lesser extent platters

benefited from reductions in scale. While Clayton Christensen’s theory of disruptive

innovation implies that the improvements in the new hard disk drives and the new electronic

systems were driven by demand for the new systems, in reality the improvements in them

were driven by the large benefits from reducing the scale of features on hard disk drives and

ICs.

Furthermore, it was the demand for the old systems that motivated these improvements in

ICs and platters and the demand for the new systems did not motivate these improvements in

ICs and platters until long after the new systems were introduced. For example, the

improvements in ICs that made the first personal computers, mobile phones and other

electronic products possible were being driven by the demand from previously introduced

electronic products. It was not until there were significant amounts of demand for the new

products that their demand became an important motivation for improvements in ICs. Thus,

cumulative production in PCs was not directly or even indirectly the main driver of

improvements in PCs until long after PCs were introduced and the PCs had become a key

driver for improvements in ICs and magnetic storage.

The upshot of this discussion is that if one is searching for potentially disruptive

innovations, one should look for technologies that are experiencing rapid improvements and

one way to find these technologies is to find technologies that benefit from reductions in

scale. Technologies that benefit from reductions in scale, either directly or indirectly through

their impact on higher level systems, will experience rapid improvements and thus they are

more likely to become economically feasible or to require large changes in design. This is

particularly true for high-level systems, where rapid improvements in components have led and continue to lead to many changes in the way the higher-level systems are designed. The next

two sections discuss the opportunities that are emerging in ICs and to a lesser extent in


electronic systems. Part II focuses on the opportunities that are emerging in electronic

systems from improvements in ICs and other components.

3.4 Discussion

The benefits from reductions in scale can be very large. Magnetic platters, magnetic tape

and ICs have experienced very rapid rates of improvements because they benefit from

reductions in scale. Reductions in the scale of magnetic storage regions, transistors and other

features have led to improvements of about 10 orders of magnitude for both magnetic platters

and ICs. Furthermore, these improvements still continue and they drive improvements in

magnetic hard disk drives, magnetic tape systems, and in electronic products such as

computers, mobile phones, televisions, set-top boxes, eBooks, cameras, and medical

equipment. In combination with creating new materials for LEDs, photosensors, and optical

fiber, these improvements have led to improvements in the bandwidth, response time and

other measures of performance for the Internet.

Thus, finding technologies that benefit from reductions in scale, either directly or

indirectly through their impact on higher level systems, can help us find both technologies

with a large potential for improvements and technologies that provide opportunities for new

entrants. Part II looks at some of these technologies in more detail with a focus on finding

technologies with a large potential for improvements in cost and performance. Chapters 5

through 8 analyze ICs, MEMS, bio-electronic ICs, and nanotechnology, technologies that

directly benefit from reductions in scale, along with the impact of these components on

existing and new forms of electronic products. Chapters 9 and 10 analyze

telecommunications and human-computer interfaces, technologies that indirectly benefit from

reductions in scale through the impact of ICs on them.

Finally, achieving these reductions in scale was not easy for ICs and magnetic storage and

they also won't be easy for MEMS, bio-electronic ICs, and nanotechnology. Reductions in


scale required improvements in manufacturing equipment for both ICs and magnetic platters

and also advances in science. Deeper and broader knowledge of solid state and plasma

physics was needed for ICs and similar knowledge of magnetic materials was needed for magnetic storage. With MEMS, bio-electronic ICs, and nanotechnology, broader and deeper levels of knowledge are needed in an even wider number of scientific areas.


Chapter 4

Geometric Scaling: Increases in Scale

The concept of geometrical scaling can also help us understand the technologies that benefit

from increases in physical scale, again by focusing on the relationship between the geometry

of a technology, its scale, and the physical laws that govern it. Since costs typically rise

as scale is increased for most technologies including those that benefit from increases in

scale, the rates of improvement for technologies that benefit from increases in scale will not

be as large as those that benefit from reductions in scale. Instead, technologies that benefit

from increases in scale do so because the costs do not rise as fast as does the output, as scale

is increased. Many of these technologies benefit from increases in scale because output is roughly proportional to a higher-order measure (e.g., length cubed, or volume) than are the costs (e.g., length squared, or area), causing output to rise faster than costs as the scale of the technology is increased. This chapter examines the reductions in cost that have occurred as the scale of equipment for production, energy, agriculture, and transportation has been increased.

4.1 Production Equipment

The benefits from increases in physical scale of production equipment are often confused

with economies of scale. Continuous processing plants, aluminum smelters, and other

material processing plants exhibit economies of scale more than do assembly plantsxxxii

because the production equipment for the former benefits more from increases in physical

scale than do the production equipment for assembly plants. For example, the production of

organic and inorganic chemicals, plastics, paints, and pharmaceuticals (sometimes called

continuous flow processes) largely consist of pipes and reaction vessels where the costs of

pipes, i.e., the surface area of cylinders, vary as a function of radius whereas the output from a pipe, i.e., the volume of flow, varies as a function of radius squared. Similarly, the costs of reaction vessels vary as a function of surface area (radius squared) whereas the output of a reaction vessel varies as a function of volume (radius cubed). This is what was meant at the beginning of the chapter by output being roughly proportional to a higher-order measure (e.g., length cubed, or volume) than are the costs (e.g., length squared, or area), causing output to rise faster than costs as scale is increased.
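To make the geometry concrete, here is a minimal sketch using illustrative shapes (a pipe of fixed length and a spherical vessel), not data from the chapter:

```python
import math

def pipe_cost_per_output(radius, length=1.0):
    # Cost ~ lateral surface area of the pipe wall (proportional to radius);
    # output ~ volume of flow, proportional to the cross-section (radius squared).
    cost = 2 * math.pi * radius * length
    output = math.pi * radius ** 2 * length
    return cost / output  # falls as 1/radius

def vessel_cost_per_output(radius):
    # Cost ~ surface area (radius squared); output ~ volume (radius cubed),
    # modeling the reaction vessel as a sphere for simplicity.
    cost = 4 * math.pi * radius ** 2
    output = (4 / 3) * math.pi * radius ** 3
    return cost / output  # also falls as 1/radius

for r in (1, 2, 4, 8):
    print(r, round(pipe_cost_per_output(r), 3), round(vessel_cost_per_output(r), 3))
# Each doubling of radius halves the cost per unit of output in both cases.
```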

Empirical analysis has confirmed the advantages from these increases in scale in

continuous flow factories: their capital costs are a function of plant size to the nth power, where n is typically between 0.6 and 0.7xxxiii. For example, the capacities of ethylene

and ammonia plants were increased by about 50 and 10 times respectively between the early

1940s and 1968. For the ethylene plants, this meant that the capital costs on a per unit basis in

1968 had fallen to about 25% their level of the early 1940sxxxiv. In other analyses, it has been

found that the cost of catalytic cracking (for gasoline) dropped by more than 50% for

materials, 80% for capital and energy, and 98% for labor between an installation in 1939 and

a later one in 1960xxxv.
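As a rough numerical check, here is a minimal sketch of this power-law rule, assuming n = 0.65 (the midpoint of the cited 0.6-0.7 range):

```python
def per_unit_capital_cost_ratio(scale_up, n=0.65):
    # If capital cost ~ capacity**n, then cost per unit of capacity ~ capacity**(n - 1).
    return scale_up ** (n - 1)

print(per_unit_capital_cost_ratio(50))  # ethylene, ~50x scale-up: ~0.25 (about 25%)
print(per_unit_capital_cost_ratio(10))  # ammonia, ~10x scale-up: ~0.45
```

The ethylene result reproduces the roughly 25% figure cited in the text.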

Similar arguments can be made for furnaces and smelters, which occupy an intermediate

position between continuous flow and discrete parts factories in terms of the benefits from

geometric scaling. Furnaces and smelters are used to process metals and ceramics and

benefits from larger scale exist in their construction and operation. This is because similar to

the above examples the cost of constructing a cylindrical shaped blast furnace is largely a

function of surface area while the output is a function of volumexxxvi. These construction costs

include both material and processing costs: for example, the cost of welding together a furnace is proportional to the length of its seams while the capacity is a function of the

container’s volume. Similarly, the heat loss from blast furnaces and smelters is proportional

to the area of its surface while the amount of metal that can be smelted is proportional to the

72

Page 73: Exponential change: what drives it, what does it tell us about the future?

cube of the surface sidesxxxvii.

Although the cost of virtually every processed metal has dropped over the last 100 to 150

years, two examples are provided. First, the construction cost per ton of steel capacity

dropped by a factor of eight as the capacity of a basic oxygen furnace was increased by about five

times in the 1950s and 1960s with the largest plants producing more than two million tons per

yearxxxviii. Similar things probably occurred in the 1800s as the height of blast furnaces was increased from 14.6 meters in 1830 to 28.6 meters in 1913xxxix.

Second, the energy per kilogram of finished aluminum that is made using the Hall-Heroult

process fell by about 75% between 1890 and 2000 as the scale of these plants was increased

by about 300 times. In this case, the size of the “cells” that are used to produce aluminum are

typically measured in terms of electrical current, i.e., amps, and increases in the current for a

single cell have led to lower energy usage and lower costs. By the year 2000, the energy

usage per kilogram was only about 50% higher than its theoretical minimum or in other

words, about one-third of the energy was heat lossxl.

A more recent technology that has experienced large reductions in cost from increases in

the scale of production equipment is liquid crystal displays (LCDs). Increases in the scale of

LCD production equipment have been accompanied by increases in the size of LCD

substrates where multiple LCD panels (e.g., ones for laptop computers) are processed on

single substrates followed by the division of these substrates into individual LCD panels. Two

analyses found that the capital costs per output are much lower for larger than smaller

substrates. One analysis found that the cost per output for one type of manufacturing

equipment was 88% cheaper for 2.7 than 0.17 square meter substratesxli. A second analysis

found that the cost per output for a complete production facility was 36% cheaper for 5.3 than

1.4 square meter substratesxlii.
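A minimal sketch of the scaling exponents these two comparisons imply, fitting the power law from earlier in the chapter through each cited pair of substrate sizes (the input numbers are from the text; the fitted exponents are our illustration):

```python
import math

def implied_exponent(small, large, cost_reduction):
    # Fit cost_per_output ~ size**(n - 1) through two (size, cost) points,
    # where cost_reduction is the fractional saving at the larger size.
    return 1 + math.log(1 - cost_reduction) / math.log(large / small)

print(implied_exponent(0.17, 2.7, 0.88))  # one type of equipment: n ~ 0.23
print(implied_exponent(1.4, 5.3, 0.36))   # a complete facility:   n ~ 0.67
```

The complete-facility figure lands in the 0.6-0.7 range cited above for continuous flow plants, while the single type of equipment scales even more steeply.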

There are several reasons for benefits to arise from increasing the scale of LCD substrates


(and semiconductor wafers and DNA wash plates) and their production equipment. First, just

as for the chemical processes and furnaces discussed above, processing time (inverse of

output) has fallen as the volume of gases, liquids, and reaction chambers has become larger

while the costs have risen as a function of the equipment’s relevant surface area. Second, the

ability to process multiple LCD display screens on a single substrate supports the first reason.

This is because the loading time and the costs of the loading equipment do not increase much

with increases in substrate size. Third, there are smaller “edge effects” with larger than

smaller substrates. Edge effects refer to the fact that yields are lower near the edge of the

substrates and to the fact that the equipment must be much larger than the substrates in order

to have consistent conditions across the substrate. The latter means that the ratio of

equipment-to-substrate size decreases as the substrate size is increasedxliii.

Similar things have occurred and are still occurring with semiconductor wafers, DNA wash plates,

solar cells and various types of displays. Increases in the size of semiconductor wafers have

contributed to the dramatic reductions in the cost of ICs. Wafer sizes are now 12” and

increases to 18” are planned. Increases in the size of DNA wash plates have contributed to the

dramatic reductions in the cost of sequencing DNA. In both cases, larger sizes enable the simultaneous

processing of individual elements, transistors in the case of ICs and genes in the case of DNA

sequencing equipment.

Solar cells and various types of displays also benefit from increases in the scale of

substrates and production equipment and these increases are currently occurring at different

rates. For example, larger wafers for crystalline solar cells are being fabricated and processed

and even larger substrates are being implemented for new types of solar cells such as thin

film ones that are constructed from different types of semiconductor, organic, and photo-

sensitive materials. Some of these solar cells and displays can also be fabricated using roll

printing and as discussed in Part II, roll printing can be much cheaper than conventional


techniques and it benefits from increases in scale.

The potential benefits from increasing the scale of production equipment in discrete parts

production are probably much smaller than they are from increasing the scale of solar cells,

displays, furnaces, smelters or continuous flow manufacturing plants. Although the cutting

speeds of lathes and boring machines were increased so that these machines could produce more parts per unit of time and of equipment cost than could smaller and slower machines, as

was discussed in Chapter 2xliv, it is harder to increase the scale of these processes and those

used to form or assemble small parts, than it is to increase the scale of pipes and reaction

vessels partly because individual parts must be moved and processedxlv. It is even harder to

increase the scale or even automate the processes used to assemble shoes or stitch together

apparel than it is to increase the scale of metal cutting, forming, and assembling processes.

Thus, mechanical products such as automobiles, appliances, and bicycles benefit more from

increases in scale than do shoes and apparel.

Consider automobiles. The price of a standard 4-seater Model T dropped from $850 in

1909 (equivalent to $20,091 today) to $440 in 1915 (equivalent to $9,237 today) and $290 in the 1920s (equivalent to $3,191 today, or similar to the cheapest Tata Nano), mostly because of

substituting equipment for labor. Since then the scale of automobile factories has been

gradually reduced over time and today few automobile factories produce more than 100,000

automobiles a year and most automobiles cost far more than $3,191. Thus, the types of

increases in scale that were discussed above for chemical factories where the scale of

production equipment was increased over time have not occurred with automobiles.

Furthermore, while today’s automobiles are much better than the Model T, clearly the price of

automobiles has not dropped to the extent that ICs, magnetic storage or even chemicals

probably have and it is doubtful whether a $3,000 automobile (about the price of a Tata-

Nano) could be manufactured today in a high-wage country such as the U.S., Germany, or


Japanxlvi.

4.2 Energy Equipment

Energy equipment has also experienced large increases in scale and the benefits from these

increases in scale are a major reason why the power density of this equipment has been

increased by many orders of magnitude over the last 100 years. For example, the scale of

steam engines was increased by more than 100 times between 1700 and 1900, of steam

turbines by more than 1000 times between the late 1800s and 1950, and of internal

combustion engines by 100,000 times between the late 1800s and now. Power densities of

internal combustion engines for autos and aircraft were increased by 100 times between the

late 1800s and the mid-1900sxlvii.

Steam and internal combustion engines benefit from increases in scale because the output

from a cylinder and a piston is roughly a function of volume while costs are roughly a

function of the external surface area of both the piston and cylinder (See Figure 4.1). Thus, as

the diameter of the cylinder (and piston) is increased, the output of the engine increases as a

function of diameter squared while the costs only rise as a function of diameter. Steam

engines also benefit from increases in the scale of the boiler; as the diameter of the boiler is

increased, the output of the engine increases as a function of diameter cubed while the costs

only rise as a function of diameter squared.
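In symbols, treating the cylinder length as fixed when the diameter d is increased (a simplification of the argument above):

```latex
% Cylinder and piston: output ~ swept volume, cost ~ external surface area
\frac{\text{cost}}{\text{output}} \propto \frac{d\,\ell}{d^{2}\,\ell} = \frac{1}{d}
\qquad
% Boiler: output ~ volume, cost ~ surface area
\frac{\text{cost}}{\text{output}} \propto \frac{d^{2}}{d^{3}} = \frac{1}{d}
```

In both cases the cost per unit of output falls roughly inversely with diameter.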

Figure 4.1 The Role of Geometric Scaling in Engines


Furthermore, these increases in scale also facilitated the use of higher temperatures and

pressures and both of these enabled higher thermal efficiencies. The efficiencies of steam

engines, internal combustion engines, diesel engines, gas engines, and combined cycle gas

engines have been increased over the last 300 years. Lumping them all together, these

efficiencies were increased from about 1% in 1700 for steam engines to more than 50% for

combined cycle gas turbines at the end of the 20th century. However, the increases in

temperatures and pressures also depended on other factors such as creating materials that can

withstand high temperatures and better tolerances that enabled closer fits between, for example, pistons and cylinders; these factors were covered in Chapter 2.

To look in more detail at how scale impacts cost per output, consider the price per horsepower (HP) of steam engines in 1800 and of existing internal combustion engines. Per horsepower, a 20-HP steam engine was 74% cheaper than a 10-HP one in 1800, and a 225-HP internal combustion engine was 2/3 cheaper than a 2.3-HP one in 2010, which is a

major reason that the price per horsepower fell as scale was gradually increased over many

years. Furthermore, much larger engines have been installed and their implementation

suggests that costs per horsepower continue to fall as scale is increased. For example, a

90,000 horsepower marine enginexlviii is used in ships and much larger versions of steam

engines have been implemented, including their modern day version, the steam turbinexlix.

Although cost or price data for these engines are not available, if the same benefits from

increases in scale were to exist in a change from 225 to 90,000 horsepower as found with a

change from 2.3 to 225 horsepower, such an extrapolation would mean that the 90,000

horsepower engine would be about 1% the price per HP of the 2.3-HP engine; this is consistent

with the several orders of magnitude improvements in the power density for engines, which

were mentioned above.

The falling price of electricity is also largely attributed to increases in the scale of steam

turbines, boilers, generators, and transmission lines. Increases in their scale led to reductions

in the capital cost per unit of output due to both the benefits of geometrical scaling and the

fact that increases in scale enabled uses of higher temperatures and pressures, which led to

higher thermal efficiencies as discussed earlier. For example, the cost per installed capacity

dropped from about $78/kilowatt for a 100-MW coal-fired plant to about $32/kilowatt for a 600-MW plant, both in 1929 dollars, and the cost benefits from increases in scale were

probably even larger for plants under 100 MW. Nuclear plants experienced similar levels of

cost reductions as their scale was increased in the 1960s and 1970sl.
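A minimal sketch of the scaling exponent implied by these two coal-plant data points, reusing the two-point power-law fit from Section 4.1 (our arithmetic, not a figure from the cited source):

```python
import math

# Cost per installed kilowatt: ~$78 at 100 MW vs ~$32 at 600 MW (1929 dollars).
n = 1 + math.log(32 / 78) / math.log(600 / 100)
print(round(n, 2))  # ~0.5, i.e., scaling at least as strong as the 0.6-0.7 rule
```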

The scale of transmission equipment (in volts) rose by more than 10,000 times between

1880 and 1965 and the price per distance of transmission fell by more than 99.9% between

1880 and 1965 (See Figure 4.2). The reason for the benefits from increases in scale is that

higher voltages require larger cables and energy loss is a function of the cable’s surface area

while transmission is a function of the cable's volumeli. Another factor driving reductions in

the price per distance of transmission was improvements in dielectric materials.

Figure 4.2 Increases in Scale (1000s Volts – right) and in Powered Distance per Cost (Watt x

km per Dollar - left)

Source: Koh, H. and Magee, C. 2008. A functional approach for studying technological progress: Extension to energy technology, Technological Forecasting and Social Change 75: 735-758.

The end result is that the price of electricity fell from $4.50 per kilowatt hour in 1892 to

about $0.09 by 1969 in constant dollars (See Figure 4.3)lii. This has had a dramatic impact on

our homes, our factories and other aspects of our lives. For example, it contributed towards

the implementation of automation in factories and of new processes in the early 20th century.

From 1969, however, fewer increases in scale were implemented and many observers now

argue that the U.S. had already implemented too much scale and more scale than did Europe

due to institutional differences. The cost of electricity in the U.S. has risen since 1969 where


excessive scale is cited to a similar extent as is increased fuel cost. Some argue that smaller

generating plants such as combined cycle gas turbines that produce both heat and electricity

have much higher efficiencies and thus lower costs than larger coal-fired power plantsliii.
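Returning to the decline from $4.50 to $0.09 cited above, a minimal sketch of the average annual rate it implies (our arithmetic on the constant-dollar figures):

```python
# ~$4.50/kWh in 1892 to ~$0.09/kWh by 1969, in constant dollars.
years = 1969 - 1892
annual_decline = 1 - (0.09 / 4.50) ** (1 / years)
print(f"~{annual_decline:.1%} average annual price decline over {years} years")  # ~5% per year
```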

Figure 4.3 U.S. Electricity Prices (1996$ per kw-hour)

Source: Hirsh R 1989. Technology and Transformation in the Electric Utility Industry, Cambridge

University Press

A third example of benefits from increases in scale can be found in jet engines. They have

benefited from increases in scale for some of the same reasons that steam turbines do. Like

the pipes in a chemical plant, a jet engine’s combustion chamber benefits from increases in

scale in that the costs rise roughly with surface area while output rises with volume. As with

other engines, larger engines (and better materials) enable higher pressures and temperatures


and these higher pressures and temperatures enable higher thermal efficiencies (See Figure

4.4) where jet engines operate at much higher temperatures than do other engines. Many

engineers forecast further increases in temperature and thus efficiency due to the use of better

temperature resistant materials (See Chapter 2), increases in scale and changes in designliv.

4.3 Transportation Equipment

Many types of transportation equipment also benefit from increases in scale since their

costs tend to rise with surface area while capacity tends to rise with volume. Since aircraft are

shaped like a long cylinder, albeit one with wings, their costs rise with increases in a

cylinder's diameter and their capacity rises with increases in the diameter squared. This is why the diameter of fuselages has been increased such that some jumbo jets have two decks.

Coupled with the benefits for jet engines from increases in scale, large aircraft have

advantages in capital, fuel, and other costs. The current capital costs per passenger of large

aircraft (A380) are 14% lower than those of small aircraft (A318) and large aircraft currently


have 48% lower fuel consumption per passenger than do small aircraft (See Figure 4.5). For

capital costs, this comparison is between an A380 (900 passengers) and an A318 (132

passengers). For fuel costs, the comparison is between a number of aircraft with passenger

capacities between 40 and 220. For simplicity these comparisons assume all the seats are

economy ones. Other benefits from increases in scale probably include lower crew and

landing costs per passenger.

Figure 4.5 Fuel Consumption (kg/passenger) vs. Scale (number of passengers)

Source: Morrel P 2007. Presentation to ATRS Conference

Furthermore, the advantages of scale for aircraft become even more apparent when one


considers that some of the first commercial aircraft, the DC-1 (early 1930s), could only carry

12 passengerslv. Since the benefits from increases in scale are probably larger for early than

for later increases in scale, it is probably true that the benefits from the increases in scale

from 12 to 40 in fuel costs or from 12 to 132 passengers in capital costs per passenger were

greater than the benefits from the increases in scale from 40 to 220 or from 132 to 900.

Nevertheless, just assuming the benefits from increases in scale are relatively constant over

the range of aircraft size, the A380 has a price per passenger almost 1/2 that of the DC-1lvi.

Ships and tankers also benefit from increases in scale because like aircraft, their costs rise

with surface area while capacity tends to rise with volume and their engines also benefit from

increases in scale. They are shaped like a long cylinder so their costs rise with increases in a

cylinder’s diameter and their capacity rises with increases in a diameter squared. Coupled

with the benefits from increasing the scale of engines, which were discussed above, large

freighters and oil tankers have lower costs per capacity than do smaller ones. The existing

capital costs per capacity for oil tankers and freight vessels are 59% and 52% lower for larger

(265 and 170 kilo-tons) than for smaller (38.5 and 40 kilo-tons) oneslvii.

Furthermore, like aircraft, the advantages of scale become even more apparent when one

considers that some of the first oil tankers were very small (e.g., 1807 tons in the late 19th

century)lviii and that the benefits from increasing the scale from 1807 to 38,500 tons were

probably larger than from increasing the scale from 38,500 to 265,000, which are the sizes for

the above calculation. Nevertheless, assuming that the benefits from increases in scale are

relatively constant over the range of tanker size suggests that today’s largest oil tanker

(265,000 tons) is almost 1/20 the price per ton of an 1807-ton tanker. Similar reductions in

cost probably occurred with the change from the small freighters of the 19th century to the medium-sized container ships of today.
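A minimal sketch of the exponents implied by the ship comparisons just cited, using the same two-point power-law fit as earlier in the chapter (our illustration):

```python
import math

def implied_exponent(small, large, cost_reduction):
    # Fit capital cost per unit of capacity ~ size**(n - 1) through two points.
    return 1 + math.log(1 - cost_reduction) / math.log(large / small)

print(implied_exponent(38.5, 265, 0.59))  # oil tankers:     n ~ 0.54
print(implied_exponent(40, 170, 0.52))    # freight vessels: n ~ 0.49
```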

Although data is not available, buses and trucks also benefit from increases in scale since


their costs tend to rise with surface area while capacity tends to rise with volume (and engines

benefit from increases in scale). Buses that can carry 300 passengers are used in China and

clearly the capital and operating costs are lower on a per passenger basis for buses than for

single passenger vehicles, as long as the buses achieve some level of capacity utilization.

Similar arguments can be made for mining, construction (e.g., excavators) and other types of

large trucks particularly when their sizes are not limited by roads, container sizes, or other

infrastructure. This might be one reason why the cost of metals fell throughout much of the

20th century.

Of course, benefitting from these increases in scale did not come easily. Stronger roads and

larger bridges, ports, canals, and airports, were needed to accommodate these larger vehicles,

aircraft, and ships, where much of this infrastructure also benefited from increases in scale.

At least to some level of scale, the capital costs of larger bridges and cranes are probably

smaller than that of smaller ones. For the transportation equipment themselves, advances in

thermodynamics, combustion, and fluid flow and improvements in materials, i.e.,

components, were necessary for the benefits from larger scale to emerge. Larger buses

required improved steel, and better internal combustion engines required improvements in other materials such as aluminum and plastics. Larger aircraft have required improvements in

aluminum, jet engines, and more recently composites; the weight of aircraft has dropped

significantly over the last 20 years as the strength-to-weight ratio of materials has been

increased several times, as was discussed in Chapter 2lix.

The benefits from these increases in scale can also be seen in the falling cost of transport

for both freight and humans. Transportation’s share of U.S. GDP in 2000 was 1/10 its level in

1870 (See Figure 4.6), the U.S. freight bill divided by GDP was 50% lower in 2000 than it

was in 1960, and the dollars per ton-mile in 2000 for rail in the U.S. was 1/10 its level in

1890. The falling cost of computers, which came from the falling cost of ICs, which were


covered in Chapter 3, also drove these reductions in transportation cost. Furthermore, the

falling costs of computers, the benefits from increases in the scale of transportation

equipment, the benefits from containerized shipping, and political changes are major drivers

of globalization.

Figure 4.6 Transportation Costs as a Percent of GNP

Source: Glaeser E and Kohlhase J 2004. Cities, regions and the decline of transport costs, Papers in Regional Science 83: 197–228

4.4 Agriculture

Agriculture also benefits from increases in scale. Evidence for this can be seen in the large

scale farms in the U.S., Canada, and Australia, their low costs on the global market, and the

large irrigation systems and equipment they use. Irrigation systems benefit from increases in


scale for some of the same reasons discussed above. The cost of a large irrigation canal is

largely a function of its outer surface area (function of radius) while its output is a function of

radius squared. Similarly, pumps, a key component in an irrigation system, benefit from increases in scale just as engines do.

Agricultural equipment is a bit more complex. While improvements in yields per acre are

primarily driven by better seeds along with better fertilizers, herbicides, and pesticides,

improvements in output per worker are driven by mechanization and the increasing scale of

the equipment. Table 4.2 summarizes the story for wheat and similar stories can be told for

other crops. The manual plowing, soil smoothing, seed planting, and harvesting were

replaced by equipment, and improvements in this equipment reduced the number of

necessary workers. The introduction of tractors followed the introduction of automobiles and

improvements in these tractors and their engines enabled the increasing mechanization of

wheat cultivation. From the end of World War II, this equipment has become increasingly

specialized and gradually larger (See example in Figure 4.7). Improvements in engines

enabled increases in the width of harvesting equipment where their capital and operating

costs probably rose slower than did their output as the equipment was made wider.

Table 4.2 Mechanization of Agriculture and Increases in Scale of Mechanization for Wheat

1830: 250-300 labor-hours required to produce 100 bushels (5 acres) of wheat with walking plow (break up soil), brush harrow (smooth soil), hand broadcast of seed, sickle (for cutting), and flail (separate kernels from rest of plant)

1890: 40-50 labor-hours required to produce 100 bushels (5 acres) of wheat with gang plow, seeder, harrow (to cover seeds), binder (cut wheat and tie into bundles), thresher (separates grain from straw), wagons, and horses

1930: 15-20 labor-hours required to produce 100 bushels (5 acres) of wheat with 3-bottom gang plow, tractor, 10-foot tandem disk (loosens soil after plowing), harrow, 12-foot combine, and trucks

1955: 6-1/2 labor-hours required to produce 100 bushels (4 acres) of wheat with tractor, 10-foot plow, 12-foot row weeder, harrow, 14-foot drill (for placing seeds), self-propelled combine, and trucks

1965: 5 labor-hours required to produce 100 bushels (3 acres) of wheat with tractor, 12-foot plow, 14-foot drill, 14-foot self-propelled combine, and trucks

1975: 3-3/4 labor-hours required to produce 100 bushels (3 acres) of wheat with tractor, 30-foot sweep disk, 27-foot drill, 22-foot self-propelled combine, and trucks

1987: 3 labor-hours required to produce 100 bushels (3 acres) of wheat with tractor, 35-foot sweep disk, 30-foot drill, 25-foot self-propelled combine, and trucks

Source: http://www.agclassroom.org/gan/timeline/farm_tech.htm
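A minimal sketch of the labor-productivity gain implied by the endpoints of Table 4.2 (our arithmetic on the table's own figures):

```python
# Labor-hours per 100 bushels of wheat, from Table 4.2.
hours_1830 = 275          # midpoint of the 250-300 range
hours_1987 = 3
years = 1987 - 1830
gain = hours_1830 / hours_1987
annual = gain ** (1 / years) - 1
print(f"~{gain:.0f}x fewer labor-hours, ~{annual:.1%} per year over {years} years")
```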


Figure 4.7 Example of Large-Scale Farm Equipment

4.5 Discussion

Many types of equipment benefit from increases in scale and these increases in scale are a

major driver of cost reductions in a number of industries. While these increases in scale do

not lead to the rates of improvement found with reductions in scale in, for example, ICs or

magnetic storage, increases in scale do have a large impact on increases in the productivity of

the agricultural sector, which has enabled both increases in population and a diversification of

human activities. Increases in scale are also a major driver of improvements in transportation,

energy, and production equipment, which have played important roles in reducing the costs of

transport, in allowing us to replace human power with machine power, and in letting us enjoy many types of products.

One might ask whether cumulative production is a driver of these increases in scale.

Cumulative production is mentioned here because it is used in learning curves, which many

use to analyze cost improvements. We acknowledge that cumulative production is a driver of

these increases and their associated reductions in cost since larger volumes are needed to

justify increases in scale.


However, some of these increases led to lower cumulative production in terms of units and

thus these increases in scale are only partly driven by increases in cumulative production. If

cumulative production were the key driver for cost reductions in engines, electricity

generation, and transportation equipment, firms would produce many small engines, steam

turbines, boilers, oil tankers, freighters, and aircraft in order to increase cumulative

production and thus reduce cost through the implementation of automated equipment and its

organization into flow lines. But firms don't do this; they produce large ones

because the large ones have lower cost and/or higher performance.

Understanding the types of technologies that benefit from increases in scale can help us

find those technologies that might have a large potential for reductions in cost. One of the

best examples of this can be found in LCDs and roll-to-roll printing. The benefits from increases in the scale of their substrates and equipment have driven cost reductions that are enabling these displays to be used in larger numbers of applications. Chapter 9 discusses other

improvements in LCDs and the impact of these improvements for the global economy; these

improvements would not exist if LCDs did not benefit from increases in the scale of their

substrates and production equipment. Furthermore, the example of LCDs can help us

understand the potential for reductions in cost for other types of displays.

There are several other key things to remember. First, this chapter or this book is not

advocating that we build large things. This book is merely arguing that increases in scale are

an important method of reducing cost and improving performance. Many factors must be weighed when choosing the scale of a system, and the low cost of large-scale equipment is just one of them.

Second, benefiting from increases in scale is not a simple matter of merely stimulating

demand. Increases in scale often require improvements in complementary technologies and

thus understanding the rates of improvements in these complementary technologies can help

us understand the rates at which increases in scale might occur. Furthermore, these

improvements in complementary technologies might be better stimulated by increases in

R&D for them, rather than by just subsidizing the demand for the final product.

Third, increases in scale do not lead to the rates of improvement that are found with

reductions in scale or even with creating new materials. Thus, we cannot expect rapid rates of

improvements from technologies that only benefit from increases in the scale of production

equipment. This highlights some of the problems with the learning curve and tying cost

reductions to cumulative increases in production.

Finally, in summary for Part I, dramatic improvements in performance did not occur during

the early years of a technology’s life cycle, as is often argued. Many argue that performance

jumps occur as part of the front end of an S-curve as increases in demand drive increases in

R&D spending and/or as R&D funding moves from an old technology to a new technology.

However, most of the trajectories shown in Part I including the ones shown in Chapter 4 did

not experience these jumps and instead many of them approximate straight lines. One reason

they approximate straight lines is because improvements occur in an incremental manner in

which each one is built on top of previous ones. Furthermore, for new materials, engineers

and scientists are creating these new materials not in response to a demand for a final

product, but in their attempts to publish papers and receive patents. Similar arguments can be

made for systems that are composed of components that are experiencing rapid

improvements. The improvements in the components are driven by old systems not new ones

and thus we would not expect to see dramatic jumps in performance following the

introduction of new types of computers or medical equipment.

A lack of jumps is also consistent with a conclusion that research is done in a decentralized

world of so-called open innovation. As opposed to the interpretation that R&D funding in

vertically integrated organizations moves from old to new technology, many technologies are being simultaneously developed in vertically disintegrated organizations in which publications, patents, and fame motivate many of the world's researchers, who number in the millions. These researchers are pursuing a number of technologies that are experiencing

exponential rates of improvement and Part II discusses many of these technologies.

Finally, Part I helps us understand those technologies that are experiencing or will likely experience rapid improvements. Since data on rates of improvement are not always available, particularly for technologies that are very new, understanding the reasons for rapid rates of improvement can

help us identify those technologies with the potential for rapid improvements. This is

discussed in Part II. Technologies that benefit from reductions in scale (e.g., integrated

circuits) have experienced much more rapid improvements than have technologies that

benefit from increases in scale (e.g., engines). Creating new materials (and processes for

them) often leads to rapid improvements in performance and cost when new classes of

materials are continuously being created.


i The precise definition of exponential improvements actually includes rates of less than 2% a year and thus doubling times as great as 40 years. But we are focusing on technologies that have much more rapid rates, such as greater than 15% a year. ii Hirsh R 1989. Technology and Transformation in the Electric Utility Industry, Cambridge University Press; Tarascon, J. 2009, Batteries for transportation now and in future, Presented at Energy 2050, Stockholm Sweden, October 19-20. Renewable Energy Sources and Climate Change Mitigation: Special Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. 2013iii An exception is glass fiber.iv Bonner, J. Why Size Matters: From Bacteria to Blue Whales, Princeton University Press, 2006. Schmidt-Nielsen K 1984. Scaling: Why is Animal Size so Important?v See August 2012 issue of National Geographicvi See for example, Lipsey, R. Carlaw, K. and Bekar, C. 2005. Economic Transformations, NY: Oxford Univ Press. Sahal, D. 1985. Technological guideposts and innovation avenues, Research Policy 14: 61-82. Winter S 2008. Scaling heuristics shape technology! Should economic theory take notice? Industrial and Corporate Change 17(3): 513–531. vii For example, many argue that R&D funds moved from charcoal to coal as deforestation occurred and later they moved from whale oil to petroleum as sperm whales were almost hunted to extinction. Robert Ayres, Invited Lecture (and Conference) at INSEAD, April 10, 2013. Robert Ayres,viii http://www.nsf.gov/statistics/seind12/c0/c0s5.htmix Kahn, H and Wiener A, 1967, The Year 2000, A Framework for Speculation on the Next Thirty-

Three Years, NY: Macmillan; Kaku M. 2011. Physics of the Future, NY: Doubleday. Stevenson, M 2011. An Optimists Tour of the Future, Profile Books.

x Daniel Kahneman, Thinking Fast and Slow, 2011xi de Weck, O., Roos, D., and Magee, C. 2011. Engineering Systems, Cambridge: MIT Press.xii Sources, from top to bottom: Nordhaus, W. 1997. Do real output and real wage measures capture reality? The history of Light suggests not. in The Economics of New Goods, Gordon R. and Bresnahan T (ed), University of Chicago Press for National Bureau of Economic Research: 29-66. Azevedo I, Morgan G, Morgan F, 2009. The Transition to Solid State Lighting, Proceedings of the IEEE 97: 481-510. Sheats, J, H Antoniadis, M Hueschen, W Leonard, J Miller, R Moon, D Roitman, A Stockinget, Organic Electroluminescent Devices, Science 273 (1996): 884-888. Lee, C 2005. OLED 1 – Introduction, http://wenku.baidu.com/view/783fa93283c4bb4cf7ecd196.html. Martinson R 2007. Industrial markets beckon for high-power diode lasers, Optics, October: 26-27. ww.nlight.net/nlight-files/file/articles/OLE%2010.2007_Industrial%20markets...pdf. Suzuki T, 2010, Challenges of Image-Sensor Development, Int’l Solid State Sensors Conference. Nemet, G. 2006. Beyond the learning curve: factors influencing cost reductions in photovoltaics, Energy Policy 34: 3218-3232. Alexander A and Nelson J 1973.Measuring technological change: aircraft turbine engine, Technological Forecasting and Social Change 5, 189–203. Sahal, D. 1985. Technological guideposts and innovation avenues, Research Policy 14: 61-82. Koh, H. and Magee, C. 2008. A functional approach for studying technological progress: Extension to energy technology, Technological Forecasting and Social Change 75: 735-758. Wikipedia, 2013. en.wikipedia.org/wiki/Moore's_law, last accessed on January 30, 2013. Stasiak J, Richards S, and Angelos S 2009. Hewlett Packard's Inkjet MEMS Technology, Proc. of Society of SPIE: 7318, http://144.206.159.178/ft/CONF/16431771/16431793.pdf. Koh H and Magee, C. 2006. A functional approach for studying technological progress: Application to information technologies, Techn Forecasting and Social Change 73: 1061-1083. Koomey J, Berard S, Sanchez M, Wong H, 2011. Implications of Historical Trends in the Electrical Efficiency of Computing, IEEE Annals of the History of Computing 33(3): 46-54. Economist, 2012. 2012. Television Making: Cracking Up, January 21, 2012, p. 66. Kurzwell, R., 2005, The Singularity is Near, NY: Penguin Books. Kalender W. 2006. X-ray computed tomography, Physics in Medicine and Biology 51: 29-53. Shaw J and Seidler P 2001. Organic electronics: Introduction, IBM Journal of R&D; 45(1): 3-9. Dong H, Wang

Page 94: Exponential change: what drives it, what does it tell us about the future?

C, Hu W 2010. High Performance Organic Semiconductors for Field-Effect Transistor, Chemical Communications 46: 5211-5222 Koh H and Magee, C. 2006. A functional approach for studying technological progress: Application to information technologies, Techn Forecasting and Social Change 73: 1061-1083. Amaya, M and Magee, C. 2008. The Progress in Wireless Data Transport and its Role in the Evolving Internet, MIT Technical Report. NHGRI, 2012. National Human Genome Research Institute. www.genome.gov/sequencingcosts/, Last accessed on September 10, 2012. Seth G, Hossler P, Yee J and Hu W 2006. Engineering cells for cell culture bio-processing. Advances in Biochemical Engineering/Biotechnology101: 119-164. U.S. Department of Agriculture), 2012. Last accessed on September 10, 2012 www.ers.usda.gov/data-products/agricultural-productivity-in-the-us.aspx. Glaeser E and Kohlhase J 2004. Cities, regions and the decline of transport costs, Papers in Regional Science 83: 197–228. Martino J 1971. Examples of Technological Trend Forecasting for

Research and Development Planning. Technological Forecasting and Social Change 2: 247-260. NAS/NRC, 1989. Materials Science and Engineering for the 1990s. National Academy Press. Ayres, R. and Weaver, P. 1998. Eco-restructuring: implications for sustainable development, NY: United Nations University Press. American Machinist 1977. Metalworking: Yesterday and Tomorrow, November.

xiii NAS/NRC, 1989. Materials Science and Engineering for the 1990s. National Academy Press.
xiv Ayres, R. and Weaver, P. 1998. Eco-restructuring: implications for sustainable development, NY: United Nations University Press.
xv Tarascon, J. 2009. Batteries for Transportation Now and In the Future, presented at Energy 2050, Stockholm, Sweden, October 19-20.
xvi http://www.nec.com/en/press/201310/global_20131001_03.html
xvii http://phys.org/news/2013-09-sodium-ion-battery-cathode-highest-energy.html
xviii See for example Smil, V. 2010. Energy Transitions, NY: Praeger, Figure 4.1, or http://en.wikipedia.org/wiki/Energy_density, last accessed on January 25, 2010.
xix NAS/NRC, 1989. Materials Science and Engineering for the 1990s. National Academy Press, and http://www.tdk.co.jp/magnet_e/superiority_02
xx http://pubs.usgs.gov/fs/2002/fs087-02
xxi NAS/NRC, 1989. Materials Science and Engineering for the 1990s. National Academy Press.
xxii "1971-1985 Continuing the Tradition". GE Innovation Timeline. General Electric Company. Retrieved 2012-09-28.
xxiii U.S. Department of Agriculture and Michael Bomford, Crop Yield Projections For Biofuels Fall Short, http://energyfarms.wordpress.com/2009/09/03/crop-yield-projections-miss-biofuel-report-target/
xxiv https://www.soils.org/publications/cs/articles/46/2/528
xxv Michael Bomford, Crop Yield Projections For Biofuels Fall Short.
xxvi Weil, A. 2004. Health and Healing: The Philosophy of Integrative Medicine, Houghton Mifflin, Boston MA.
xxvii See Christensen, McKendrick, King and Tucci.
xxviii Kurzweil, R. 2005. The Singularity is Near, NY: Penguin Books; ICKnowledge, 2009.
xxix Smith, R. 1988. A Historical Overview of Computer Architecture, IEEE Annals of the History of Computing 10(4): 277-303.
xxx Koomey, J., Berard, S., Sanchez, M. and Wong, H. 2011. Implications of Historical Trends in the Electrical Efficiency of Computing, IEEE Annals of the History of Computing 33(3): 46-54.
xxxi For example, see Figure 8-2 in Rowen, C. 2004. Engineering the Complex SOC, NY: Prentice Hall.
xxxii See for example: Chandler, A. 1994. Scale and Scope: The Dynamics of Industrial Capitalism, Boston: Belknap. Gold, B. 1981. Changing Perspectives on Size, Scale, and Returns: An Interpretive Survey, Journal of Economic Literature 19(1): 5-33. Freeman, C. and Soete, L. 1997. The Economics of Industrial Innovation, MIT Press. Pratten, C. 1971. Economies of Scale in Manufacturing Industry, Cambridge University Press.

xxxiii For example, see Haldi, J. and Whitcomb, D. 1967. Economies of scale in industrial plants, Journal of Political Economy 75: 373-385. Axelrod, L., Daze, R. and Wickham, H. 1968. The large plant concept, Chemical Engineering Progress 64(7): 17. Rosenberg, N. 1994. Exploring the Black Box, Cambridge University Press. Mannan, S. 2005. Lee's Loss Prevention in the Process Industries, Vol. 1, Burlington, MA: Elsevier Butterworth-Heinemann. Levin, R. 1977. Technical change and optimal scale: some implications, Southern Economic Journal 2: 208-221. Winter, S. 2008. Scaling heuristics shape technology! Should economic theory take notice? Industrial and Corporate Change 17(3): 513-531.
xxxiv If n is 0.65, then the capital costs for the five million ton plant are about one-quarter of those for the 0.1 million ton plant on a per unit basis (Axelrod et al., 1968; Mannan, 2005); a worked sketch of this cost-capacity power law follows these notes.
xxxv Enos, J. 1962. Petroleum Progress and Profits, The MIT Press, Cambridge, MA.
xxxvi The height of a blast furnace was doubled and the diameter increased by 30% between 1830 and 1913 (Smil, 2005, Figure 4.1).
xxxvii Lipsey, R., Carlaw, K. and Bekar, C. 2005. Economic Transformations, NY: Oxford University Press. For example, energy costs for ammonia production fell by one-half, and by 70% for the best plants, between 1920 and 2000 (Smil, V. 2008. Energy in Nature and Society, MIT Press, Figure 10.6).
xxxviii Gold, B. 1974. Evaluating Scale Economies: The Case of Japanese Blast Furnaces, The Journal of Industrial Economics 23(1): 1-18.
xxxix Smil, V. 2005. Creating the Twentieth Century, Oxford University Press.
xl See for example Smil, V. 2005. Creating the Twentieth Century, Oxford University Press, Figure 4.12; the electrochemistry encyclopedia (http://electrochem.cwru.edu/encycl/art-a01-al-prod.htm); and aluminum statistics from the U.S. Geological Survey (minerals.usgs.gov/ds/2005/140/aluminum.pdf).
xli Keshner, M. and Arya, R. 2004. Study of Potential Cost Reductions Resulting from Super-Large Scale Manufacturing of PV Modules, Final Report for National Renewable Energy Laboratory (NREL/SR-520-36846).
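
To make the arithmetic behind note xxxiv concrete, here is a minimal worked sketch of the cost-capacity power law, assuming total capital cost scales as capacity raised to the power n (the plant sizes and n = 0.65 come from the note itself; the resulting ratio is our illustration, not a figure taken from Axelrod et al. or Mannan):

% Cost-capacity power law: C(Q) = k Q^n, so per-unit cost C(Q)/Q = k Q^{n-1}
\[
\frac{C(Q_2)/Q_2}{C(Q_1)/Q_1}
= \left(\frac{Q_2}{Q_1}\right)^{n-1}
= \left(\frac{5\ \text{Mt}}{0.1\ \text{Mt}}\right)^{0.65-1}
= 50^{-0.35} \approx 0.25
\]
% i.e., under this law the 5 Mt plant needs roughly one-quarter the
% capital per ton of capacity that the 0.1 Mt plant needs.

This is the same logic behind the "six-tenths rule" long used in chemical-plant cost estimation.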

xlii DisplaySearch, 2010. Flat Panel Display Market Outlook, http://www.docstoc.com/docs/53390734/Flat-Panel-Display-Market-Outloo, last accessed on April 22, 2012.

xliii (Keshner and Arya, 2004; DisplaySearch, 2010).
xliv Rosenberg, N. 1963. Technological Change in the Machine Tool Industry, 1840-1910, The Journal of Economic History 23(4): 414-443. Rosenberg, N. 1969. The Direction of Technological Change: Inducement Mechanisms and Focusing Devices, Economic Development and Cultural Change 18(1): 1-24. Hounshell, D. 1984. From the American System to Mass Production, 1800-1932: The Development of Manufacturing Technology in the United States, Baltimore: Johns Hopkins University Press.
xlv It is difficult to increase machining and part-handling speeds, particularly since parts must be handled and processed individually. Some of these and related issues are addressed by Rosenberg (1969).
xlvi The low price of the Tata Nano is due more to the low wages of Indian workers and the low regard for safety than to the efficiency of the design and the production system.
xlvii Smil, V. 2010. Energy Transitions. NY: Praeger. Koh, H. and Magee, C. 2008. A functional approach for studying technological progress: Extension to energy technology, Technological Forecasting and Social Change 75: 735-758.
xlviii http://news.softpedia.com/news/How-Does-The-World-039-s-Biggest-Combustion-Engine-Work-54883.shtml
xlix Smil, V. 2010. Energy Transitions. NY: Praeger.
l See Hirsh (1989), in particular Figures 16 through 21. Increasing the voltage in transmission systems dramatically reduced energy losses in long-distance transmission, and without low energy losses it would have been difficult to benefit from the geometrical scaling of generating stations (a minimal sketch of this voltage effect follows these notes). Hirsh, R. 1989. Technology and Transformation in the Electric Utility Industry, Cambridge University Press. Munson, R. 2005. From Edison to Enron: The Business of Power and What It Means for the Future of Electricity, NY: Praeger. Smil, V. 2010. Energy Transitions. NY: Praeger.
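
Note l's claim about transmission voltage follows directly from basic circuit relations. As a minimal sketch, assume a line of fixed resistance R delivering fixed power P:

% Resistive transmission loss at fixed delivered power P and line resistance R
\[
P = V I \;\Rightarrow\; I = \frac{P}{V}, \qquad
P_{\text{loss}} = I^{2} R = \frac{P^{2} R}{V^{2}}
\]
% Losses fall with the square of the transmission voltage.

Holding P and R fixed, doubling the voltage cuts resistive losses by a factor of four, and a tenfold increase cuts them a hundredfold, which is why high-voltage lines made remote, geometrically scaled generating stations economical.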

li Koh, H. and Magee, C. 2008. A functional approach for studying technological progress: Extension to energy technology, Technological Forecasting and Social Change 75: 735-758. American Electric Power, Transmission Facts.
lii Data on capital cost per unit of output are from Hirsh (1999). Edison's Pearl Street Station plant of 1882 was about 100 kW. Benefits from geometrical scaling can also be seen in the price per kilowatt of existing diesel generators: for example, the price of a large Cummins engine (2250 kW) is less than 20% that of smaller ones (e.g., 7 kW) on a per-unit-output basis (a back-of-envelope fit of the implied scaling exponent follows these notes); see the data at http://www.generatorjoe.net/store.asp
liii Most of these combined cycle turbines were made possible by advances in jet engines. Munson, R. 2005. From Edison to Enron: The Business of Power and What It Means for the Future of Electricity, NY: Praeger.
liv Intergovernmental Panel on Climate Change, Aviation and the Global Atmosphere, Chapter 7.
lv http://en.wikipedia.org/wiki/Oil_tankers; http://en.wikipedia.org/wiki/Douglas_DC-1
lvi As with engines, there is greater demand for smaller aircraft and ships than for larger ones, and thus differences in demand are not what drive the lower price per unit of output of the larger ones.
lvii UNCTAD, 2006. United Nations Conference on Trade and Development, Review of Maritime Transport.
lviii http://en.wikipedia.org/wiki/Oil_tankers; http://en.wikipedia.org/wiki/Douglas_DC-1
lix Freeman, C. and Soete, L. 1997. The Economics of Industrial Innovation, MIT Press. Cardwell, D. 2001. Wheels, Clocks, and Rockets: A History of Technology, NY: W. W. Norton. Crump, T. 2001. Science: As Seen Through the Development of Scientific Instruments, London: Constable and Robinson. McClellan, J. and Dorn, H. 2006. Science and Technology in World History, Baltimore: Johns Hopkins University Press.
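
As a back-of-envelope illustration of note lii, the two quoted generator price points imply a scaling exponent of the same kind as in note xxxiv (our arithmetic from the note's own figures, not a number taken from the cited price data):

% Fit the power law (Q2/Q1)^{n-1} to the quoted per-kW price ratio of 0.2
\[
\left(\frac{2250\ \text{kW}}{7\ \text{kW}}\right)^{n-1} \approx 0.2
\;\Rightarrow\;
n-1 \approx \frac{\ln 0.2}{\ln(2250/7)} \approx \frac{-1.61}{5.77} \approx -0.28,
\qquad n \approx 0.72
\]
% An implied exponent close to the 0.6-0.7 range invoked for process plants.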