TRANSCRIPT
7538Y - Draft #6
CONGRESSIONAL ECONOMIC LEADERSHIP INSTITUTE - 2/25/88
R. M. Price
INTRODUCTION
I welcome this opportunity to talk about supercomputers—the alpha
and omega of high technology.
Most of us have some idea of what a supercomputer is, but I think
it's appropriate to begin with a definition.
If you were to ask the engineers and scientists who work with
supercomputers for a definition, they would probably say: "A
supercomputer is just below what we currently need."
A less whimsical and more pragmatic definition is: A supercomputer
is the most powerful computer available at any given state of
computer technology.
Today I would like to discuss four aspects of supercomputers:
1) The nature of the supercomputer business; 2) Its importance;
3) Parallelism and the growth in computing power; and, 4) Actions
the U.S. government can take to help ensure continued American
superiority in supercomputing.
THE HIGH-RISK NATURE OF SUPERCOMPUTERS
The most significant characteristic — in fact probably the only
one — to remember about the nature of this business is high
risk — very high risk. Supercomputers are the highest risk part
of the high risk computer industry because they involve both
extraordinary technical risks and commercial risks.
Since supercomputers are the highest performance machines for any
given state of technology, the technical risk may seem obvious.
But the risk goes far beyond that. In building the most powerful
system possible there is an extremely subtle balance that must be
struck between architecture and circuit technology. In
oversimplified terms this is because the design of a supercomputer
takes roughly four years. So, you are aiming at a point four
years out. Trying to anticipate the state of technology that far
out can lead, and has led, to an absolute dead end. On the other
hand, working only with proven technology will almost surely
preclude the successful implementation of advanced architectural
concepts.
Moreover, any design that truly moves out the frontiers of
computing will require new software. That compounds the already
great hardware risk.
As to the commercial risk, to recover development costs, a company
must be able to sell into the world market. Access to that market
for supercomputers is obviously a political as well as an economic
matter. Also, because of procurement cycles, development delays
are highly leveraged in terms of available market windows. The
past is replete with examples of all this technical and commercial
risk and failure.
The supercomputer story really begins with the efforts of IBM and
Univac in the middle 1950's, which produced computers known as the
LARC (for the Lawrence Livermore Lab) and the STRETCH (for Los
Alamos). LARC and STRETCH were shared risk developments, with the
government contracting in advance for the machines. The vendors'
risks involved agreeing to a fixed price and guarantee of
completion of the contract. Both these early projects suffered
from the soon to be familiar experience in supercomputer
development of underestimating complexity and technological
challenges. Although in both cases something was finally
delivered, both attempts can only be labeled as financial
disasters.
The unquestioned beginning of supercomputers as a distinct class
came in 1964 with the delivery of the initial Control Data 6600
computers. It was in connection with this event that the term
"supercomputer" actually first came into use. The great success
of the 6600 has obscured a crucial matter. It was designed
twice. Seymour Cray, who worked for Control Data at the time,
misjudged in trying to anticipate circuit technology. The
development flopped. He had to start over.
Also lost in the mists of the past is that even when the
successful design was completed, the machine couldn't be produced
in any volume and ultimately it was, in effect, designed a third
time. On top of that, its design peculiarities were such that the
most common customer complaint for the first five years of its
existence was that its software yielded only 30-40% of its
potential power.
Nevertheless, the supercomputer era was launched. The realization
of computational rates in the "megaflop" range led immediately to
demands by high energy physicists for even greater horsepower.
With the potential of the 6600 barely digested, the government
labs set forth needs which led to more advanced designs involving
"multiprocessors" and "vector processors." Out of this effort
came the Texas Instruments "Advanced Scientific Computer"
development, the Burroughs "Illiac IV" and the Control Data
"Star-100" projects. The hope of developers and customers alike
was that these machines would be available as early as 1970-71.
Such was not to be the case.
Meanwhile at Control Data, Seymour was also designing a new
machine — the 6800. The machine was to be a compatible extension
of the 6600 and four times faster. It too failed. Once again it
was necessary to start over and by using a different design —
incompatible with the 6600 — the desired performance was
achieved. But, because of the incompatibility, the software costs
were enormous.
This redesigned 6800 was called the 7600 and enjoyed an
extraordinarily long run as the world's most powerful computer.
The reason for that longevity was that the TI-ASC effort failed
and was abandoned. The ILLIAC-IV failed and was abandoned. The
STAR-100 failed and was redesigned, redesigned, and redesigned
again until finally in 1979, the Cyber 205 appeared. It hardly
seems necessary to remind you that hundreds of millions of dollars
were involved in all this.
In those same years Seymour's own effort to build the follow-on to
the 7600 — a machine to be called the 8600 — had also failed.
Again the problem was the circuit technology selection. But this
time the selection was too conservative rather than too
ambitious. At that point Seymour left Control Data and a year or
so later established Cray Research. Having learned from the 8600
experience, he produced the eminently successful CRAY-1.
By the start of the 80's, after 10-12 years of effort, the
"radical" vector and multi and parallel processing architectures
began to take hold with the advent of the CRAY XMP, and more
recently the ETA-10 and a variety of less capable machines.
I have dwelt on this history at some length only to emphasize the
point I made at the beginning. In supercomputer development,
there have been more failures than successes. To date, every
successful machine, except the Cray XMP and the ETA-10, has been
designed at least twice before it succeeded. (I can't speak to
the Japanese experience).
The U.S. lead in supercomputers has been established, then, not
only by the technical expertise of some truly remarkable people
but equally by management perseverance and very enlightened
cooperation between government and industry.
The technical and marketing risks make the supercomputing industry
similar to the aerospace industry. The analogy can be carried
even further. The cost to design, develop and manufacture a new
generation supercomputer is roughly the same as the cost to
design, develop and manufacture a new fighter aircraft. Both
industries are a critical part of America's defense system.
Now, imagine what would happen if the official position of the
U.S. government toward the aerospace industry was: "We need an
airplane. Why don't you spend $200 million to develop one? If we
like it, we might buy some."
No company would build an airplane. So, the government routinely
funds not only the companies it buys aircraft from but also
others to compete with them, so that it is not at the mercy of any
one supplier.
The risk in the supercomputer industry can be reduced in the same
way. Government would guarantee the purchase of a number of
supercomputers as an investment in national defense. The
guarantee, of course, is contingent on a product that performs to
specification.
But this is assured from the beginning. Just as the American
aerospace industry knows how to build airplanes, the U.S.
supercomputer industry knows how to build supercomputers. We've
learned the hard way through a quarter of a century of experience.
THE IMPORTANCE OF SUPERCOMPUTERS
Why is all this so important to U.S. competitiveness and general
economic health?
I opened my remarks by calling supercomputers the alpha and omega
of high technology. They are not only the ultimate embodiment of
the most advanced electronics technology; they are in fact the
driving force behind new advances across a broad spectrum of
technologies.
Indeed, supercomputers are as important, if not more so, for the
technological advances they spawn as for their role as engines of
computation. Those advances ultimately find their way
into the mainstream of computers from mainframes to workstations
and personal computers.
For the sake of time I won't give you a litany of the
architectural concepts, semiconductor devices, packaging and
cooling technologies that have been generated and come into common
use as a result of the desire to produce the highest possible
level of computing performance. But no matter where you look
across the computer industry you find technology and processes
spawned a decade or more ago by supercomputer development.
The role of the supercomputer as the engine of technological
advance, however, goes far beyond the computer and semiconductor
industries.
As they are applied to problems at the frontiers of knowledge in
other disciplines, these computers speed the advance of technology
in aerospace, automotive, biotechnology, petroleum and many other
industries.
Weather modeling was one of the earliest applications to which
supercomputers were applied. Each advance in computing capability
has given rise to more advanced mathematical weather models, but
the need is still greater than any existing or planned
supercomputer can fulfill.
In a recent survey of U.S. professional airplane pilots, a major
safety concern was the need for better weather information, especially
about wind shear. Many pilots consider wind shear a more serious problem
than overcrowding in the skies.
There are many other such examples I could cite. But what it
comes down to is this: the nation that leads in information
processing technology is destined to be the competitive leader in
world trade; it will be the nation that generates new ideas in
research sciences from high energy physics to genetic
engineering; it will be the nation that is capable of producing
new advances in military technology; and it will be the nation
that brings more new products to market.
As I said, supercomputers are both the alpha and omega of high
technology.
PARALLELISM AND GROWTH IN COMPUTING POWER
On that note, let me turn to parallel computing. The recent media
attention given this topic would lead you to believe that
computers until now were all based on an architecture not
involving parallelism; that those of the future will be; that we
have reached a watershed in computing.
Nothing could be more misleading. The von Neumann computer concept
in no way dictates the number of things going on in a computer at
the same time. The entire history of computer design is one of
increasing parallelism or, more accurately stated, concurrency.
There are only two ways to increase the effective speed of a
computer: one is faster circuits, the other is greater
concurrency — i.e. having more things going on at the same time.
The limiting constraints on the former are physical — ultimately
the speed of light itself. The constraint on the latter is
increasing complexity of control. It's very straightforward —
the more things that are going on simultaneously the harder it is
to assure that they are helping one another or at least not
hindering one another.
Anyone chairing a large committee understands the problem.
Almost at the beginning of computer history someone observed that
while a computer is working on one instruction, it might as well
be getting ready for another instruction. Next, someone noted
that you might as well be fetching the data for the following
instruction while working on the one before.
Over time, more sophisticated versions of data fetch and
instruction staging evolved, giving rise to now familiar design
concepts such as multiple functional units, vector processing and
multiple processors.
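The payoff from even the simplest of these overlaps is easy to see
in rough numbers. The following is a minimal sketch, in Python,
assuming purely for illustration that fetching an instruction and
executing one each take a single cycle; no real machine is this simple.

# Illustrative only: a toy cycle count for overlapping fetch and execute.
# The one-cycle-per-step assumption is hypothetical, not a real design.

def sequential_cycles(n_instructions: int) -> int:
    # Fetch, then execute, one instruction at a time: two cycles each.
    return 2 * n_instructions

def overlapped_cycles(n_instructions: int) -> int:
    # Fetch the next instruction while executing the current one.
    # After the first fetch, one instruction completes every cycle.
    return n_instructions + 1

n = 1_000_000
print(f"strictly sequential:   {sequential_cycles(n):,} cycles")
print(f"fetch/execute overlap: {overlapped_cycles(n):,} cycles")

In this toy model the overlap nearly doubles effective speed with no
change in circuit technology at all.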
But to illustrate the crucial point — complexity of control —
let me consider parallelism by comparing different ways of mowing
your lawn.
Without using any parallelism, you could simply hire one person
with one lawnmower to do the job. But you have a large lawn and
it will take one person four hours. To shorten the time, you
could contract with four people to do the job — one person for
each side of your house. The control is simple: "A" mows front,
"B" mows back, "C" mows left side, "D" mows right side. In this
case, you have used parallel processing to reduce the time to mow
your lawn from four hours to one.
Let's push this method and hire 240 people to mow the lawn. Can
you expect the job will be done in one minute?
Not exactly. The problem is you must spend a lot of time
contacting each of these people and telling them what to do so
that they aren't running over each other with their lawnmowers.
With 240 workers, the simple job of mowing the lawn becomes a
major task of control.
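The lawn-mowing arithmetic can be sketched the same way. The little
Python model below is purely illustrative: the half-minute "briefing"
cost per worker is an invented figure standing in for the growing
burden of control, and the four hours of mowing come from the example
above.

# Illustrative only: a toy model of the lawn-mowing example.
# The per-worker briefing cost is an invented number representing
# the time spent directing workers so the mowers don't collide.

MOW_MINUTES = 240        # one person with one mower: four hours of work
BRIEFING_MINUTES = 0.5   # assumed coordination cost for each worker hired

def total_time(workers: int) -> float:
    # The mowing splits evenly across workers, but each worker added
    # also adds a fixed amount of coordination time.
    return MOW_MINUTES / workers + BRIEFING_MINUTES * workers

for n in (1, 4, 16, 64, 240):
    t = total_time(n)
    print(f"{n:4d} workers -> {t:6.1f} minutes (speedup {MOW_MINUTES / t:4.1f}x)")

In this model four workers finish in about an hour, just as in the
example, while 240 workers take roughly two hours rather than one
minute, because the time spent directing them swamps the time saved
mowing. The numbers are invented, but the shape of the curve is the
point: past some level of parallelism, adding workers slows the job
down unless the control problem is solved.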
In state-of-the-art computing today, we know how to manage modest
levels of parallelism. But we don't know how to manage large
numbers of parallel processors effectively.
Everybody in the supercomputer industry is working on
parallelism. We're all at about the same level — 4 to 8
processors.
The next round will probably produce machines with anywhere from
16 to 64 processors. Effective application beyond that to general
purpose computing is not understood today. Ultimately it
certainly will be. It is equally certain that there will be many
failures along the way.
Parallelism is an area where the U.S. clearly leads Japanese
manufacturers. All of the American supercomputer manufacturers
have a much better understanding of the software management and
operating problems of multiple systems.
Nevertheless, we are feeling the pressure of a concerted Japanese
effort to become the world leader in supercomputer technology.
What's even more significant is that the Japanese are not only
building supercomputers for sale, they are also embedding them in
their infrastructure. By placing supercomputers in both their
universities and workplaces, the Japanese are gearing up their
industries to use this very powerful tool.
Despite a greater awareness in the public and private sectors of
the importance of maintaining a lead in supercomputers, which
followed the FCCSET report of 1983, the U.S. still has not
regained the momentum lost in the decade from 1972 to 1982 or
built the necessary industry infrastructure.
There are several reasons for this.
First, we have become dependent on foreign sources for much of the
technology of supercomputers. Second, we have not restored the
breadth of policy or the practices that led to our past success.
Finally, there is insufficient recognition of the importance of
foreign markets in the economics of supercomputers.
The Japanese have had no trouble understanding that advanced
technology is both the result of and the underpinning of
supercomputer development. They also understand world market
dynamics as well as the weakness of the U.S. in both areas.
ACTIONS THE U.S. SHOULD TAKE
The obvious question at this point is: What can the U.S.
government do to enhance the competitiveness of the U.S.
supercomputer industry?
There needs to be an ongoing rigorous dialogue with Japan to
achieve a level playing field in supercomputer trade. The
supercomputer agreement reached last year was a good start. This
dialogue should cover the whole range of issues from market access
to predatory pricing practices. I know these are tough problems,
but that doesn't mean they should be put in the "too hard" file.
Protective tariffs or government subsidies to prop up the domestic
supercomputer industry are not the answer.
What's really needed is a proactive, affirmative government policy
of supporting technological excellence in supercomputing.
This policy must take at least three forms:
First, it should establish a formal program of assigning promising
supercomputer design proposals to specific laboratories and
agencies that will procure and integrate these systems into their
working environments. The procurement of supercomputers that
satisfy design and performance specifications should be guaranteed.
Second, it should relax and simplify export control procedures.
Export control policy is driven by defense needs, economic
concerns, and international relations. However, the concept of
"National Security Interest" has been taken to mean only the first
of these factors. Any broader interpretation will demand improved
communication and cooperation between industry and government.
This is equally true for the more mundane, but equally crucial
task of improving and speeding the licensing process itself.
Third, the policy should stipulate that the DOD, DOE, NSF and
other agencies give broader support to U.S. university
procurements of U.S. supercomputers. (Even today there are more
supercomputers in Japanese universities than in our own.)
Those, then, are three recommendations for a government policy to
help ensure the continued strength of the U.S. computer industry.
The uniqueness of supercomputers, both technologically and
economically, demands creative and cooperative approaches between
industry and government to development, procurement and use. If
we act together on this reality, we can preserve the lead we have
in supercomputing.
To quote the President's Commission on Industrial
Competitiveness: "Technology propels our economy forward."
And I would add: supercomputers propel technology.
Thank you.
R M Price CDC speeches Charles Babbage Institute <www.cbi.umn.edu>