
High Performance Computing and Networking for Science

September 1989


Recommended Citation:

U.S. Congress, Office of Technology Assessment, High Performance Computing and Networking for Science—Background Paper, OTA-BP-CIT-59 (Washington, DC: U.S. Government Printing Office, September 1989).

Library of Congress Catalog Card Number 89-600758

For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402-9325

(Order form can be found in the back of this report.)


Foreword

Information technology is fundamental to today's research and development: high performance computers for solving complex problems; high-speed data communication networks for exchanging scientific and engineering information; very large electronic archives for storing scientific and technical data; and new display technologies for visualizing the results of analyses.

This background paper explores key issues concerning the Federal role in supporting national high performance computing facilities and in developing a national research and education network. It is the first publication from our assessment, Information Technology and Research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation.

OTA gratefully acknowledges the contributions of the many experts, within and outside the government, who served as panelists, workshop participants, contractors, reviewers, detailees, and advisers for this document. As with all OTA reports, however, the content is solely the responsibility of OTA and does not necessarily constitute the consensus or endorsement of the advisory panel, workshop participants, or the Technology Assessment Board.

Director



High Performance Computing and Networking for Science Advisory Panel

Charles Bender
Director
Ohio Supercomputer Center

Charles DeLisi
Chairman
Department of Biomathematical Science
Mount Sinai School of Medicine

Deborah L. Estrin
Assistant Professor
Computer Science Department
University of Southern California

Robert Ewald
Vice President, Software
Cray Research, Inc.

Kenneth Flamm
Senior Fellow
The Brookings Institution

Malcolm Getz
Associate Provost
Information Services & Technology
Vanderbilt University

Ira Goldstein
Vice President, Research
Open Software Foundation

Robert E. Kraut
Manager
Interpersonal Communications Group
Bell Communications Research

John P. (Pat) Crecine, Chairman
President, Georgia Institute of Technology

Lawrence Landweber
Chairman
Computer Science Department
University of Wisconsin-Madison

Carl Ledbetter
President/CEO
ETA Systems

Donald Marsh
Vice President, Technology
Contel Corp.

Michael J. McGill
Vice President
Technical Assessment & Development
OCLC Online Computer Library Center, Inc.

Kenneth W. Neves
Manager
Research & Development Program
Boeing Computer Services

Bernard O'Lear
Manager of Systems
National Center for Atmospheric Research

William Poduska
Chairman of the Board
Stellar Computer, Inc.

Elaine Rich
Director
Artificial Intelligence Lab
Microelectronics and Computer Technology Corp.

Sharon J. Rogers
University Librarian
Gelman Library
The George Washington University

William Schrader
President
NYSERNET

Kenneth Toy
Post-Graduate Research Geophysicist
Scripps Institution of Oceanography

Keith Uncapher
Vice President
Corporation for the National Research Initiatives

Al Weis
Vice President
Engineering & Scientific Computer Data Systems Division
IBM Corp.

NOTE: OTA is grateful for the valuable assistance and thoughtful critiques provided by the advisory panel. The views expressed in this OTA background paper, however, are the sole responsibility of the Office of Technology Assessment.




OTA Project Staff: High Performance Computing

John Andelin, Assistant Director, OTA
Science, Information, and Natural Resources Division

James W. Curlin, Program Manager
Communication and Information Technologies Program

Fred W. Weingarten, Project Director

Charles N. Brownstein, Senior Analyst

Lisa Heinz, Analyst

Elizabeth I. Miller, Research Assistant

Administrative Staff

Elizabeth Emanuel, Administrative Assistant

Karolyn Swauger, Secretary

Jo Anne Price, Secretary

Other Contributors

Bill Bartelone, Legislative/Federal Program Manager, Cray Research, Inc.

Mervin Jones, Program Analyst, Defense Automation Resources Information Center

Timothy Lynagh, Supervisory Data and Program Analyst, Government Services Administration


List of Reviewers

Janice Abraham
Executive Director
Cornell Theory Center
Cornell University

Lee R. Alley
Assistant Vice President for Information Resources Management
Arizona State University

James Almond
Director
Center for High Performance Computing
Balcones Research Center

Julius Archibald
Department Chairman
Department of Computer Science
State University of New York College at Plattsburgh

J. Gary Augustson
Executive Director
Computer and Information Systems
Pennsylvania State University

Philip Austin
President
Colorado State University

Steven C. Beering
President
Purdue University

Jerry Berkman
Fortran Specialist
Central Computing Services
University of California at Berkeley

Kathleen Bernard
Director for Science Policy and Technology Programs
Cray Research, Inc.

Justin L. Bloom
President
Technology International, Inc.

Charles N. Brownstein
Executive Officer
Computing & Information Science & Engineering
National Science Foundation

Eloise E. Clark
Vice President, Academic Affairs
Bowling Green University

Paul Coleman
Professor
Institute of Geophysics and Space Physics
University of California

Michael R. Dingerson
Associate Vice Chancellor for Research and Dean of the Graduate School
University of Mississippi

Christopher Eoyang
Director
Institute for Supercomputing Research

David Farber
Professor
Computer & Information Science Department
University of Pennsylvania

Sidney Fernbach
Independent Consultant

Susan Fratkin
Director, Special Programs
NASULGC

Doug Gale
Director, Computer Research Center
Office of the Chancellor
University of Nebraska-Lincoln

Robert Gillespie
President
Gillespie, Folkner & Associates, Inc.

Eiichi Goto
Director, Computer Center
University of Tokyo

C.K. Gunsalus
Assistant Vice Chancellor for Research
University of Illinois at Urbana-Champaign

Judson M. Harper
Vice President of Research
Colorado State University

Gene Hemp
Senior Associate V.P. for Academic Affairs
University of Florida

Nobuaki Ieda
Senior Vice President
NTT America, Inc.

Hiroshi Inose
Director General
National Center for Science Information System

Heidi James
Executive Secretary
United States Activities Board
IEEE

Russell C. Jones
University Research Professor
University of Delaware

Brian Kahin, Esq.
Research Affiliate on Communications Policy
Massachusetts Institute of Technology

Robert Kahn
President
Corporation for National Research Initiatives

Hisao Kanai
Executive Vice President
NEC Corporation

Hiroshi Kashiwagi
Deputy Director-General
Electrotechnical Laboratory

Lauren Kelly
Department of Commerce

Thomas Keyes
Professor of Chemistry
Boston University




List of Reviewers (continued)

Doyle Knight
President
John von Neumann National Supercomputer Center
Consortium for Scientific Computing

Mike Levine
Co-director of the Pittsburgh Supercomputing Center
Carnegie Mellon University

George E. Lindamood
Program Director
Industry Service
Gartner Group, Inc.

M. Stuart Lynn
Vice President for Information Technologies
Cornell University

Ikuo Makino
Director
Electrical Machinery & Consumer Electronics Division
Ministry of International Trade and Industry

Richard Mandelbaum
Vice Provost for Computing
University of Rochester

Martin Massengale
Chancellor
University of Nebraska-Lincoln

Gerald W. May
President
University of New Mexico

Yoshiro Miki
Director, Policy Research Division
Science and Technology Policy Bureau
Science and Technology Agency

Takeo Miura
Senior Executive Managing Director
Hitachi, Ltd.

J. Gerald Morgan
Dean of Engineering
New Mexico State University

V. Rama Murthy
Vice Provost for Academic Affairs
University of Minnesota

Shoichi Ninomiya
Executive Director
Fujitsu Limited

Bernard O'Lear
Manager of Systems
National Center for Atmospheric Research

Ronald Orcutt
Executive Director
Project Athena
MIT

Tad Pinkerton
Director
Office of Information Technology
University of Wisconsin-Madison

Harold J. Raveche
President
Stevens Institute of Technology

Ann Redelf
Manager of Information Services
Cornell Theory Center
Cornell University

Glenn Ricart
Director
Computer Science Center
University of Maryland at College Park

Ira Richer
Program Manager
DARPA/ISTO

John Riganati
Director of Systems Research
Supercomputer Research Center
Institute for Defense Analyses

Mike Roberts
Vice President
EDUCOM

David Roselle
President
University of Kentucky

Nora Sabelli
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign

Steven Sample
President
SUNY, Buffalo

John Sell
President
Minnesota Supercomputer Center

Hiroshi Shima
Deputy Director-General for Technology Affairs
Agency of Industrial Science and Technology, MITI

Yoshio Shimamoto
Senior Scientist (Retired)
Applied Mathematics Department
Brookhaven National Laboratory

Charles Sorber
Dean, School of Engineering
University of Pittsburgh

Harvey Stone
Special Assistant to the President
University of Delaware

Dan Sulzbach
Manager, User Services
San Diego Supercomputer Center

Tatsuo Tanaka
Executive Director
Interoperability Technology Association for Information Processing, Japan

Ray Toland
President
Alabama Supercomputing Network Authority

Kenneth Tolo
Vice Provost
University of Texas at Austin

Kenneth Toy
Post-Graduate Research Geophysicist
Scripps Institution of Oceanography

August B. Turnbull, III
Provost & Vice President, Academic Affairs
Florida State University



List of Reviewers (continued)

Gerald Turner
Chancellor
University of Mississippi

Douglas Van Houweling
Vice Provost for Information & Technology
University of Michigan

Anthony Villasenor
Program Manager, Science Networks
Office of Space Science and Applications
National Aeronautics and Space Administration

Hugh Walsh
Data Systems Division
IBM

Richard West
Assistant Vice President, IS&AS
University of California

Steve Wolff
Program Director for Networking
Computing & Information Science & Engineering
National Science Foundation

James Woodward
Chancellor
University of North Carolina at Charlotte

Akihiro Yoshikawa
Research Director
BRIE/IIS
University of California, Berkeley

NOTE: OTA is grateful for the valuable assistance and thoughtful critiques provided by the advisory panel. The views expressed in this OTA background paper, however, are the sole responsibility of the Office of Technology Assessment.




Contents

Chapter 1: Introduction and Overview . . . . . 1
    Observations . . . . . 1
    RESEARCH AND INFORMATION TECHNOLOGY—A FUTURE SCENARIO . . . . . 1
    MAJOR ISSUES AND PROBLEMS . . . . . 1
    NATIONAL IMPORTANCE—THE NEED FOR ACTION . . . . . 3
        Economic Importance . . . . . 3
        Scientific Importance . . . . . 4
        Timing . . . . . 4
        Users . . . . . 5
        Collaborators . . . . . 5
        Service Providers . . . . . 5

Chapter 2: High Performance Computers . . . . . 7
    WHAT IS A HIGH PERFORMANCE COMPUTER? . . . . . 7
    HOW FAST IS FAST? . . . . . 8
    THE NATIONAL SUPERCOMPUTER CENTERS . . . . . 9
        The Cornell Theory Center . . . . . 9
        The National Center for Supercomputing Applications . . . . . 10
        Pittsburgh Supercomputing Center . . . . . 10
        San Diego Supercomputer Center . . . . . 10
        John von Neumann National Supercomputer Center . . . . . 11
    OTHER HPC FACILITIES . . . . . 11
        Minnesota Supercomputer Center . . . . . 11
        The Ohio Supercomputer Center . . . . . 12
        Center for High Performance Computing, Texas (CHPC) . . . . . 12
        Alabama Supercomputer Network . . . . . 13
        Commercial Labs . . . . . 13
        Federal Centers . . . . . 13
    CHANGING ENVIRONMENT . . . . . 13
    REVIEW AND RENEWAL OF THE NSF CENTERS . . . . . 15
    THE INTERNATIONAL ENVIRONMENT . . . . . 16
        Japan . . . . . 16
        Europe . . . . . 17
        Other Nations . . . . . 19

Chapter 3: Networks . . . . . 21
    THE NATIONAL RESEARCH AND EDUCATION NETWORK (NREN) . . . . . 22
        The Origins of Research Networking . . . . . 22
        The Growing Demand for Capability and Connectivity . . . . . 23
        The Present NREN . . . . . 23
        Research Networking as a Strategic High Technology Infrastructure . . . . . 25
        Federal Coordination of the Evolving Internet . . . . . 25
        Players in the NREN . . . . . 26
        The NREN in the International Telecommunications Environment . . . . . 28
        Policy Issues . . . . . 28
        Planning Amidst Uncertainty . . . . . 29
        Network Scope and Access . . . . . 29
        Policy and Management Structure . . . . . 31
        Financing and Cost Recovery . . . . . 32
        Network Use . . . . . 33
        Longer-Term Science Policy Issues . . . . . 33
        Technical Questions . . . . . 34
        Federal Agency Plans: FCCSET/FRICC . . . . . 34
        NREN Management Desiderata . . . . . 35

Figures
1-1. An Information Infrastructure for Research . . . . . 2
1-2. Distribution of Federal Supercomputers . . . . . 14

Tables
2-1. Some Key Academic High Performance Computer Installations . . . . . 12
3-1. Principal Policy Issues in Network Development . . . . . 30
3-2. Proposed NREN Budget . . . . . 36



Chapter 1

Introduction and Overview

Observations

The Office of Technology Assessment is conducting an assessment of the effects of new information technologies—including high performance computing, data networking, and mass data archiving—on research and development. This background paper offers a midcourse view of the issues and discusses their implications for current discussions about Federal supercomputer initiatives and legislative initiatives concerning a national data communication network.

Our observations to date emphasize the critical importance of advanced information technology to research and development in the United States, the interconnection of these technologies into a national system (and, as a result, the tighter coupling of policy choices regarding them), and the need for immediate and coordinated Federal action to bring into being an advanced information technology infrastructure to support U.S. research, engineering, and education.

RESEARCH AND INFORMATION TECHNOLOGY—A FUTURE SCENARIO

Within the next decade, the desks and laboratory benches of most scientists and engineers will be entry points to a complex electronic web of information technologies, resources, and information services, connected together by high-speed data communication networks (see figure 1-1). These technologies will be critical to pursuing research in most fields. Through powerful workstation computers on their desks, researchers will access a wide variety of resources, such as:

- an interconnected assortment of local campus, State and regional, national, and even international data communication networks that link users worldwide;
- specialized and general-purpose computers, including supercomputers, minisupercomputers, mainframes, and a wide variety of special architectures tailored to specific applications;
- collections of application programs and software tools to help users find, modify, or develop programs to support their research;
- archival storage systems that contain specialized research databases;
- experimental apparatus, such as telescopes, environmental monitoring devices, seismographs, and so on, designed to be set up and operated remotely;
- services that support scientific communication, including electronic mail, computer conferencing systems, bulletin boards, and electronic journals;
- a "digital library" containing reference material, books, journals, pictures, sound recordings, films, software, and other types of information in electronic form; and
- specialized output facilities for displaying the results of experiments or calculations in more readily understandable and visualizable ways.

Many of these resources are already used in some form by some scientists. Thus, the scenario that is drawn is a straightforward extension of current usage. Its importance for the scientific community and for government policy stems from three trends: 1) the rapidly and continually increasing capability of the technologies; 2) the integration of these technologies into what we will refer to as an "information infrastructure"; and 3) the diffusion of information technology into the work of most scientific disciplines.

Few scientists would use all the resources and facilities listed, at least on a daily basis; and the particular choice of resources eventually made available on the network will depend on how the tastes and needs of research users evolve. However, the basic form, high-speed data networks connecting user workstations with a worldwide assortment of information technologies and services, is becoming a crucial foundation for scientific research in most disciplines.

MAJOR ISSUES AND PROBLEMS

Developing this system to its full potential will require considerable thought and effort on the part of government at all levels, industry, research institutions, and the scientific community itself. It will present policymakers with some difficult questions and decisions.



Figure 1-1: An Information Infrastructure for Research

[Figure: a diagram showing researchers' workstations connected through networks to on-line experiments, mainframes, electronic mail, bulletin boards, digital electronic libraries, data archives, and associated services.]

Scientific applications are very demanding on technological capability. A substantial R&D component will need to accompany programs intended to advance R&D use of information technology. To realize the potential benefits of this new infrastructure, research users need advances in such areas as:

- more powerful computer designs;
- more powerful and efficient computational techniques and software;
- very high-speed switched data communications;
- improved technologies for visualizing data results and interacting with computers; and
- new methods for storing and accessing information from very large data archives.

An important characteristic of this system is that different parts of it will be funded and operated by different entities and made available to users in different ways. For example, databases could be operated by government agencies, professional societies, non-profit journals, or commercial firms. Computer facilities could similarly be operated by government, industry, or universities. The network, itself, already is an assemblage of pieces funded or operated by various agencies in the Federal Government; by States and regional authorities; and by local agencies, firms, and educational institutions. Keeping these components interconnected technologically and allowing users to move smoothly among the resources they need will present difficult management and policy problems.

Furthermore, the system will require significant capital investment to build and maintain, as well as specialized technical expertise to manage. How the various components are to be funded, how costs are to be allocated, and how the key components such as the network will be managed over the long term will be important questions.

Since this system as envisioned would be so widespread and fundamental to the process of research, access to it would be crucial to participation in science. Questions of access and participation are crucial to planning, management, and policymaking for the network and for many of the services attached to it.

Changes in information law brought about by the electronic revolution will create problems and conflicts for the scientific community and may influence how and by whom these technologies are used. The resolution of broader information issues such as security and privacy, intellectual property protection, access controls on sensitive information, and government dissemination practices could affect whether and how information technologies will be used by researchers and who may use them.

Finally, to the extent that, over the long run, modern information technology becomes so fundamental to the research process, it will transform the very nature of that process and the institutions—libraries, laboratories, universities, and so on—that serve it. These basic changes in science would affect government both in the operation of its own laboratories and in its broader relationship as a supporter and consumer of research. Conflicts may also arise to the extent that government becomes centrally involved, both through funding and through management, with the traditionally independent and uncontrolled communication channels of science.

NATIONAL IMPORTANCE—THE NEED FOR ACTION

Over the last 5 years, Congress has become increasingly concerned about information technology and research. The National Science Foundation (NSF) has been authorized to establish supercomputer centers and a science network. Bills (S. 1067, H.R. 3131) are being considered in the Congress to authorize a major effort to plan and develop a national research and education network and to stimulate information technology use in science and education. Interest in the role information technology could play in research and education has stemmed, first, from the government's major role as a funder, user, and participant in research and, secondly, from concern for ensuring the strength and competitiveness of the U.S. economy.

Observation 1: The Federal Government needs to establish its commitment to the advanced information technology infrastructure necessary for furthering U.S. science and education. This need stems directly from the importance of science and technology to economic growth, the importance of information technology to research and development, and the critical timing for certain policy decisions.

Economic Importance

A strong national effort in science and technology is critical to the long-term economic competitiveness, national security, and social well-being of the United States. That, in the modern international economy, technological innovation is concomitant with social and economic growth is a basic assumption held in most political and economic systems in the world these days; and we will take it here as a basic premise. It has been a basic finding in many OTA studies.¹ (This observation is not to suggest that technology is a panacea for all social problems, nor that serious policy problems are not often raised by its use.) Benefits from this infrastructure are expected to flow into the economy in three ways:

First, the information technology industry can benefit directly. Scientific use has always been a major source of innovation in computers and communications technology. Packet-switched data communication, now a widely used commercial offering, was first developed by the Defense Advanced Research Projects Agency (DARPA) to support its research community. Department of Energy (DOE) national laboratories have, for many years, made contributions to supercomputer hardware and software. New initiatives to develop higher speed computers and a national science network could similarly feed new concepts back to the computer and communications industry as well as to providers of information services.

¹ For example, U.S. Congress, Office of Technology Assessment, Technology and the American Economic Transition, OTA-TET-283 (Washington, DC: U.S. Government Printing Office, May 1988) and Information Technology R&D: Critical Trends and Issues, OTA-CIT-268 (Washington, DC: U.S. Government Printing Office, February 1985).

Secondly, by improving the tools and methodologies for R&D, the infrastructure will impact the research process in many critical high technology industries, such as pharmaceuticals, airframes, chemicals, consumer electronics, and many others. Innovation and, hence, international competitiveness in these key R&D-intensive sectors can be improved.

The economy as a whole stands to benefit from increased technological capabilities of information systems and improved understanding of how to use them. A National Research and Education Network could be the precursor to a much broader high capacity network serving the United States, and many research applications developed for high performance computers result in techniques much more broadly applicable to commercial firms.

Scientific Importance

Research and development is, inherently, an information activity. Researchers generate, organize, and interpret information, build models, communicate, and archive results. Not surprisingly, then, they are now dependent on information technology to assist them in these tasks. Many major studies by many scientific and policy organizations over the years—as far back as the President's Science Advisory Committee (PSAC) in the middle 1960s, and as recently as a report by COSEPUP of the National Research Council published in 1988²—have noted these trends and analyzed the implications for science support. The key points are as follows:

- Scientific and technical information is increasingly being generated, stored, and distributed in electronic form;
- Computer-based communications and data handling are becoming essential for accessing, manipulating, analyzing, and communicating data and research results;
- In many computationally intensive R&D areas, from climate research to groundwater modeling to airframe design, major advances will depend upon pushing the state of the art in high performance computing, very large databases, visualization, and other related information technologies. Some of these applications have been labeled "Grand Challenges." These projects hold promise of great social benefit, such as designing new vaccines and drugs, understanding global warming, or modeling the world economy. However, for that promise to be realized in those fields, researchers require major advances in available computational power.
- Many proposed and ongoing "big science" projects, from particle accelerators and large array radio telescopes to the NASA EOS satellite project, will create vast streams of new data that must be captured, analyzed, archived, and made available to the research community. These new demands could well overtax the capability of currently available resources.

Timing

Government decisions being made now and in the near future will shape the long-term utility and effectiveness of the information technology infrastructure for science. For example:

- NSF is renewing its multi-year commitments to all or most of the existing National Supercomputing Centers.
- Executive agencies, under the informal auspices of the Federal Research Internet Coordinating Committee (FRICC), are developing a national "backbone" network for science. Decisions made now will have long-term influence on the nature of the network, its technical characteristics, its cost, its management, services available on it, access, and the information policies that will govern its use.
- The basic communications industry is in flux, as are the policies and rules by which government regulates it.
- Congress and the Executive Branch are currently considering, and in some cases have started, several new major scientific projects, including a space station, the Earth Orbiting System, the Hubble space telescope, the superconducting supercollider, human genome mapping, and so on. Technologies and policies are needed to deal with these "firehoses of data." In addition, upgrading the information infrastructure could open these projects and data streams to broad access by the research community.

² Panel on Information Technology and the Conduct of Research, Committee on Science, Engineering, and Public Policy, Information Technology and the Conduct of Research: The User's View (Washington, DC: National Academy Press, 1989).

Observation 2: Federal policy in this area needs to be more broadly based than has been traditional with Federal science efforts. Planning, building, and managing the information technology infrastructure requires cutting across agency programs and the discipline- and mission-oriented approach of science support. In addition, many parties outside the research establishment will have important roles to play and stakes in the outcome of the effort.

The key information technologies (high performance computing centers, data communication networks, and large data archives, along with a wide range of supporting software) are used in all research disciplines and support several different agency missions. In many cases, economies of scale and scope dictate that some of these technologies (e.g., supercomputers) be treated as common resources. Some, such as communication networks, are most efficiently used if shared or interconnected in some way.

There are additional scientific reasons to treat information resources as a broadly used infrastructure: fostering communication among scientists between disciplines, sharing resources and techniques, and expanding access to databases and software, for instance. However, there are very few models from the history of Federal science support for creating and maintaining infrastructure-like resources for science and technology across agency and disciplinary boundaries. Furthermore, since the networks, computer systems, databases, and so on interconnect and users must move smoothly among them, the system requires a high degree of coordination rather than being treated as simply a conglomeration of independent facilities.

However, if information technology resources for science are treated as infrastructure, a major policy issue is one of boundaries. Who is it to serve; who are its beneficiaries? Who should participate in designing it, building and operating it, providing services over it, and using it? The answers to these questions will also indicate to Congress who should be part of the policymaking and planning process; they will govern the long term scale, scope, and technological characteristics of the infrastructure itself; and they will affect the patterns of support for the facilities. Potentially interested parties include the following:

Users

Potential users might include academic and industrial researchers, teachers, and graduate, undergraduate, and high school students, as well as others, such as the press or public interest groups, who need access to and make use of scientific information. Institutions, such as universities and colleges, libraries, and schools, also have user interests. Furthermore, foreign scientists working as part of international research teams or in firms that operate internationally will wish access to the U.S. system, which, in turn, will need to be connected with other nations' research infrastructures.

Collaborators

Another group of interested parties includes State and local governments and parts of the information industry. We have identified them with the term "collaborators" because they will be participating in funding, building, and operating the infrastructure. States are establishing State supercomputer centers and supporting local and regional networking, some computer companies participate in the NSF National Supercomputer Centers, and some telecommunication firms are involved in parts of the science network.

Service Providers

Finally, to the extent that the infrastructure serves as a basic tool for most of the research and development community, information service providers will require access to make their products available to scientific users. The service providers may include government agencies (which provide access to government scientific databases, for example), libraries and library utilities, journal and textbook publishers, professional societies, and private software and database providers.

Observation 3: Several information policy issues will be raised in managing and using the network. Depending on how they are resolved, they could sharply restrict the utility and scope of network use in the scientific community.

Security and privacy have already become of major concern and will pose a problem. In general, users will want the network and the services on it to be as open as possible; however, they will also want the networks and services to be as robust and dependable as possible, free from deliberate or accidental disruption. Furthermore, different resources will require different levels of security. Some bulletin boards and electronic mail services may want to be as open and public as possible; others may require a high level of privacy. Some databases may be unique and vital resources that will need a very high level of protection; others may not be so critical. Maintaining an open, easily accessible network while protecting privacy and valuable resources will require careful balancing of legal and technological controls.

Intellectual property protection in an electronic environment may pose difficult problems. Providers will be concerned that electronic databases, software, and even electronic formats of printed journals and other writings will not be adequately protected. In some cases, the product itself may not be well protected under existing law. In other cases, electronic formats coupled with a communications network erode the ability to control restrictions on copying and disseminating.

Access controls may be called for on material that is deemed to be sensitive (although unclassified) for reasons of national security or economic competitiveness. Yet, the networks will be accessible worldwide, and the ability to identify and control users may be limited.

The above observations have been broad, looking at the overall collection of information technology resources for science as an integrated system and at the questions raised by it. The remaining portion of this paper will deal specifically with high performance computers and networking.


Chapter 2

High Performance Computers

An important set of issues has been raised during the last 5 years around the topic of high performance computing (HPC). These issues stem from a growing concern in both the executive branch and in Congress that U.S. science is impeded significantly by lack of access to HPC1 and by concerns over the competitiveness implications of new foreign technology initiatives, such as the Japanese "Fifth Generation Project." In response to these concerns, policies have been developed and promoted with three goals in mind:

1. To advance vital research applications currently hampered by lack of access to very high speed computers.
2. To accelerate the development of new HPC technology, providing enhanced tools for research and stimulating the competitiveness of the U.S. computer industry.
3. To improve software tools and techniques for using HPC, thereby enhancing their contribution to general U.S. economic competitiveness.

In 1984, the National Science Foundation (NSF) initiated a group of programs intended to improve the availability and use of high performance computers in scientific research. As the centerpiece of its initiative, after an initial phase of buying and distributing time at existing supercomputer centers, NSF established five National Supercomputing Centers.

Over the course of this and the next year, the initial multiyear contracts with the National Centers are coming to an end, which has provoked a debate about whether and, if so, in what form they should be renewed. NSF undertook an elaborate review and renewal process and announced that, depending on agency funding, it is prepared to proceed with renewing at least four of the centers.2 In thinking about the next steps in the evolution of the advanced computing program, the science agencies and Congress have asked some basic questions. Have our perceptions of the needs of research for HPC changed since the centers were started? If so, how?

Have we learned anything about the effectiveness of the National Centers approach? Should the goals of the Advanced Scientific Computing (ASC) and other related Federal programs be refined or redefined? Should alternative approaches be considered, either to replace or to supplement the contributions of the centers?

OTA is presently engaged in a broad assessment of the impacts of information technology on research, and as part of that inquiry, is examining the question of scientific computational resources. It has been asked by the requesting committees for an interim paper that might help shed some light on the above questions. The full assessment will not be completed for several months, however, so this paper must confine itself to some tentative observations.

WHAT IS A HIGH PERFORMANCE COMPUTER?

The term "supercomputer" is commonly used in the press, but it is not necessarily useful for policy. In the first place, the definition of power in a computer is highly inexact and depends on many factors, including processor speed, memory size, and so on. Secondly, there is no clear lower boundary of supercomputer power. IBM 3090 computers come in a wide range of configurations, some of the largest of which are the basis of supercomputer centers at institutions such as Cornell and the Universities of Utah and Kentucky. Finally, technology is changing rapidly and with it our conceptions of the power and capability of various types of machines. We use the more general term, "high performance computers," a term that includes a variety of machine types.

One class of HPC consists of very large, powerful machines, principally designed for very large numerical applications such as those encountered in science. These computers are the ones often referred to as "supercomputers." They are expensive, costing up to several million dollars each.

1Peter D. Lax, Report of the Panel on Large-Scale Computing in Science and Engineering (Washington, DC: National Science Foundation, 1982).

2One of the five centers, the John von Neumann National Supercomputer Center, has been based on ETA-10 technology. The Center has been asked to resubmit a proposal showing revised plans in reaction to the withdrawal of that machine from the market.


A large-scale computer's power comes from a combination of very high-speed electronic components and specialized architecture (a term used by computer designers to describe the overall logical arrangement of the computer). Most designs use a combination of "vector processing" and "parallelism." A vector processor is an arithmetic unit of the computer that performs a series of similar calculations in an overlapping, assembly line fashion. (Many scientific calculations can be set up in this way.)

Parallelism uses several processors, assuming that a problem can be broken into large independent pieces that can be computed on separate processors. Currently, large, mainframe HPCs, such as those offered by Cray and IBM, are only modestly parallel, having as few as two and up to as many as eight processors.3 The trend is toward more parallel processors on these large systems. Some experts anticipate machines with as many as 512 processors appearing in the near future. The key problem to date has been to understand how problems can be set up to take advantage of the potential speed advantage of larger scale parallelism.
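The decomposition that parallelism depends on can be illustrated with a short sketch in a modern scripting language (our own toy illustration, not code from any system discussed here; thread workers stand in for hardware processors, and the function names are ours):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one independent piece of the problem.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Split the data into independent chunks, one per worker.
    # Finding such a decomposition is exactly the hard part for
    # real scientific codes on parallel machines.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # Run the pieces concurrently, then combine the results.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum_of_squares(list(range(10))))  # 285
```

Note that the combining step (the final sum) is inherently serial; overhead like this is one reason real applications rarely achieve the full theoretical speedup of a parallel machine.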

Several machines are now on the market that are based on the structure and logic of a large supercomputer, but use cheaper, slower electronic components. These systems make some sacrifice in speed, but cost much less to manufacture. Thus, an application that is demanding, but that does not necessarily require the resources of a full-size supercomputer, may be much more cost effective to run on such a "minisuper."

Other types of specialized systems have also appeared on the market and in the research laboratory. These machines represent attempts to obtain major gains in computation speed by means of fundamentally different architectures. They are known by colorful names such as "Hypercubes," "Connection Machines," "Dataflow Processors," "Butterfly Machines," "Neural Nets," or "Fuzzy Logic Computers." Although they differ in detail, many of these systems are based on large-scale parallelism. That is, their designers attempt to get increases in processing speed by hooking together in some way a large number (hundreds or even thousands) of simpler, slower and, hence, cheaper processors. The problem is that computational mathematicians have not yet developed a good theoretical or experiential framework for understanding in general how to arrange applications to take full advantage of these massively parallel systems. Hence, they are still, by and large, experimental, even though some are now on the market and users have already developed applications software for them. Experimental as these systems may seem now, many experts think that any significantly large increase in computational power eventually must grow out of experimental systems such as these or from some other form of massively parallel architecture.

Finally, "workstations," the descendants of personal desktop computers, are increasing in power; new chips now in development will offer computing power nearly equivalent to that of a Cray 1 supercomputer of the late 1970s. Thus, although top-end HPCs will be correspondingly more powerful, scientists who wish to do serious computing will have a much wider selection of options in the near future.

A few policy-related conclusions flow from this discussion:

• The term "supercomputer" is a fluid one, potentially covering a wide variety of machine types, and the "supercomputer industry" is similarly increasingly difficult to identify clearly.
• Scientists need access to a wide range of high performance computers, ranging from desktop workstations to full-scale supercomputers, and they need to move smoothly among these machines as their research needs dictate.
• Hence, government policy needs to be flexible and broadly based, not overly focused on narrowly defined classes of machines.

HOW FAST IS FAST?

Popular comparisons of supercomputer speeds are usually based on processing speed, the measure being "FLOPS," or "floating point operations per second." The term "floating point" refers to a particular format for numbers within the computer that is used for scientific calculation; and a floating point "operation" refers to a single arithmetic step, such as adding two numbers, using the floating point format. Thus, FLOPS measure the speed of the arithmetic processor. Currently, the largest supercomputers have processing speeds ranging up to several billion FLOPS.

3To distinguish between this modest level and the larger scale parallelism found on some more experimental machines, some experts refer to this limited parallelism as "multiprocessing."

However, pure processing speed is not by itself a useful measure of the relative power of computers. To see why, let's look at an analogy.

At a supermarket checkout counter, the calculation speed of the register does not, by itself, determine how fast customers can purchase their groceries and get out of the store. Rather, the speed of checkout is also affected by the rate at which each purchase can be entered into the register and the overall time it takes to complete a transaction with a customer and start a new one. Of course, ultimately, the length of time the customer must wait in line to get to the clerk may be the biggest determinant of all.

Similarly, in a computer, how fast calculations can be set up and presented to the processor, and how fast new jobs and their associated data can be moved into, and completed work moved out of, the computer determine how much of the processor's speed can actually be harnessed. (Some users refer to this as "solution speed.") In a computer, those speeds are determined by a wide variety of hardware and software characteristics. And, similar to the store checkout, as a fast machine becomes busy, users may have to wait a significant time to get their turn. From a user's perspective, then, a theoretically fast computer can look very slow.

In order to fully test a machine's speed, experts use what are called "benchmark programs," sample programs that reproduce the actual work load. Since workloads vary, there are several different benchmark programs, and they are constantly being refined and revised. Measuring a supercomputer's speed is, itself, a complex and important area of research. It lends insight into which type of computer currently on the market is best to use for particular applications; and carefully structured measurements can also show where bottlenecks occur and, hence, where hardware and software improvements need to be made.
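The gap between peak processor speed and delivered "solution speed" can be made concrete with a toy timing sketch (our own illustration in a modern scripting language; real benchmark programs, as described above, reproduce entire workloads rather than a single loop):

```python
import time

def toy_benchmark(n=200_000):
    # Perform a known number of floating point operations and time them.
    x = 0.0
    start = time.perf_counter()
    for i in range(n):
        x += 1.0e-6 * i           # one multiply and one add per pass
    elapsed = time.perf_counter() - start
    # Achieved rate in operations per second; this includes all loop
    # and memory overhead, so it falls well short of the processor's
    # theoretical peak, which is the point benchmarking makes visible.
    return (2 * n) / elapsed

print(f"achieved: {toy_benchmark():.3g} FLOPS")
```

The measured figure depends on the language, the compiler or interpreter, and the machine's memory system as much as on the arithmetic unit, which is why no single FLOPS number characterizes a computer.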

One can draw a few policy implications from these observations on speed:

• Since overall speed improvement is closely linked with how their machines are actually programmed and used, computer designers are critically dependent on feedback from that part of the user community which is pushing their machines to the limit.
• There is no "fastest" machine. The speed of a high performance computer is too dependent on the skill with which it is used and programmed, and on the particular type of job it is being asked to perform.
• Until machines are available in the market and have been tested for overall performance, policymakers should be skeptical of announcements, based purely on processor speeds, that some company or country is producing "faster machines."
• Federal R&D programs for improving high performance computing need to stress software and computational mathematics as well as research on machine architecture.

THE NATIONAL SUPERCOMPUTER CENTERS

In February of 1985, NSF selected four sites to establish national supercomputing centers: the University of California at San Diego, the University of Illinois at Urbana-Champaign, Cornell University, and the John von Neumann Center in Princeton. A fifth site, Pittsburgh, was added in early 1986. The five NSF centers are described briefly below.

The Cornell Theory Center

The Cornell Theory Center is located on the campus of Cornell University. Over 1,900 users from 125 institutions access the center. Although Cornell does not have a center-oriented network, 55 academic institutions are able to utilize the resources at Cornell through special nodes. A 14-member Corporate Research Institute works within the center in a variety of university-industry cost sharing projects.

In November of 1985 Cornell received a 3084 computer from IBM, which was upgraded to a four-processor 3090/400VF a year later. The 3090/400VF was replaced by a six-processor 3090/600E in May, 1987. In October, 1988 a second 3090/600E was added. The Cornell center also operates several other smaller parallel systems, including an Intel iPSC/2, a Transtech NT 1000, and a Topologix T1000. Some 50 percent of the resources of the Northeast Parallel Architectures Center, which include two Connection Machines, an Encore, and an Alliant FX/80, are accessed by the Cornell facility.

Until October of 1988, all IBM computers were "on loan" to Cornell for as long as Cornell retained its NSF funding. The second IBM 3090/600, procured in October, will be paid for by an NSF grant. Over the past 4 years, corporate support for the Cornell facility accounted for 48 percent of the operating costs. During those same years, NSF and New York State accounted for 37 percent and 5 percent, respectively, of the facility's budget. This funding has allowed the center to maintain a staff of about 100.

The National Center for Supercomputing Applications

The National Center for Supercomputing Applications (NCSA) is operated by the University of Illinois at Urbana-Champaign. The Center has over 2,500 academic users from about 82 academic affiliates. Each affiliate receives a block grant of time on the Cray X-MP/48, training for the Cray, and help using the network to access the Cray.

The NCSA received its Cray X-MP/24 in October 1985. That machine was upgraded to a Cray X-MP/48 in 1987. In October 1988 a Cray-2S/4-128 was installed, giving the center two Cray machines. This computer is the only Cray-2 now at an NSF national center. The center also houses a Connection Machine 2, an Alliant FX/80 and FX/8, and over 30 graphics workstations.

In addition to NSF funding, NCSA has solicited industrial support. Amoco, Eastman Kodak, Eli Lilly, FMC Corp., Dow Chemical, and Motorola have each contributed around $3 million over a 3-year period to the NCSA. In fiscal year 1989 corporate support has amounted to 11 percent of NCSA's funding. About 32 percent of NCSA's budget came from NSF, while the State of Illinois and the University of Illinois accounted for the remaining 27 percent of the center's $21.5 million budget. The center has a full-time staff of 198.

Pittsburgh Supercomputing Center

The Pittsburgh Supercomputing Center (PSC) is run jointly by the University of Pittsburgh, Carnegie-Mellon University, and Westinghouse Electric Corp. More than 1,400 users from 44 States utilize the center. Twenty-seven universities are affiliated with PSC.

The center received a Cray X-MP/48 in March of 1986. In December of 1988 PSC became the first non-Federal laboratory to possess a Cray Y-MP. Both machines were used simultaneously for a short time; however, the center has phased out the Cray X-MP. The center's graphics hardware includes a Pixar image computer, an Ardent Titan, and a Silicon Graphics IRIS workstation.

The operating projection at PSC for fiscal year 1990, a "typical year," has NSF supporting 58 percent of the center's budget, while industry and vendors account for 22 percent of the costs. The Commonwealth of Pennsylvania and the National Institutes of Health both support PSC, accounting for 8 percent and 4 percent of the budget, respectively. Excluding working students, the center has a staff of around 65.

San Diego Supercomputer Center

The San Diego Supercomputer Center (SDSC) is located on the campus of the University of California at San Diego and is operated by General Atomics. SDSC is linked to 25 consortium members but has a user base in 44 States. At the end of 1988, over 2,700 users were accessing the center. SDSC has 48 industrial partners who use the facility's hardware, software, and support staff.

A Cray X-MP/48 was installed in December, 1985. SDSC's first upgrade, a Y-MP8/864, is planned for December, 1989. In addition to the Cray, SDSC has 5 Sun workstations, two IRIS workstations, an Evans and Sutherland terminal, 5 Apollo workstations, a Pixar, an Ardent Titan, an SCS-40 minisupercomputer, a Supertek S-1 minisupercomputer, and two Symbolics machines.

The University of California at San Diego spends more than $250,000 a year on utilities and services for SDSC. For fiscal year 1990 the SDSC believes NSF will account for 47 percent of the center's operating budget. The State of California currently provides $1.25 million per year to the center and, in 1988, approved funding of $6 million over 3 years to SDSC for research in scientific visualization. For fiscal year 1990 the State is projected to support 10 percent of the center's costs. Industrial support, which has given the center $12.6 million in donations and in-kind services, is projected to provide 15 percent of the total costs of SDSC in fiscal year 1990.

John von Neumann National Supercomputer Center

The John von Neumann National Supercomputer Center (JvNC), located in Princeton, New Jersey, is managed by the Consortium for Scientific Computing Inc., an organization of 13 institutions from New Jersey, Pennsylvania, Massachusetts, New York, Rhode Island, Colorado, and Arizona. Currently there are over 1,400 researchers from 100 institutions accessing the center. Eight industrial corporations utilize the JvNC facilities.

At present there are two Cyber 205s and two ETA-10s in use at the JvNC. The first ETA-10 was installed, after a 1-year delay, in March of 1988. In addition to these machines there are a Pixar II, two Silicon Graphics IRIS workstations, and video animation capabilities.

When the center was established in 1985 by NSF, the New Jersey Commission on Science and Technology committed $12.1 million to the center over a 5-year period. An additional $13.1 million has been set aside for the center by the New Jersey Commission for fiscal years 1991-1995. Direct funding from the State of New Jersey and university sources constitutes 15 percent of the center's budget for fiscal years 1991-1995; NSF will account for 60 percent of the budget; and projected industry revenue and cost sharing account for 25 percent of costs. Since the announcement by CDC that it would close its ETA subsidiary, the future of JvNC is uncertain. Plans have been proposed to NSF by JvNC to purchase a Cray Research Y-MP, eventually upgrading to a C-90. NSF is reviewing the plan, and a decision on renewal is expected in October of 1989.

OTHER HPC FACILITIES

Before 1984 only three universities operated supercomputers: Purdue University, the University of Minnesota, and Colorado State University. The NSF supercomputing initiative established five new supercomputer centers that were nationally accessible. States and universities began funding their own supercomputer centers, both in response to growing needs on campus and to an increased feeling on the part of State leaders that supercomputer facilities could be important stimuli to local R&D and, therefore, to economic development. Now, many State and university centers offer access to high performance computers;4 and the NSF centers are only part of a much larger HPC environment, including nearly 70 Federal installations (see table 2-1).

Supercomputer center operators perceive their roles in different ways. Some want to be a proactive force in the research community, leading the way by helping develop new applications, training users, and so on. Others are content to follow in the path that the NSF National Centers create. These differences in goals and missions lead to varied services and computer systems. Some centers are "cycle shops," offering computing time but minimal support staff. Other centers maintain a large support staff and offer consulting, training sessions, and even assistance with software development. Four representative centers are described below.

Minnesota Supercomputer Center

The Minnesota Supercomputer Center, originally part of the University of Minnesota, is a for-profit computer center owned by the University. Currently, several thousand researchers use the center, over 700 of whom are from the University of Minnesota. The Minnesota Supercomputing Institute, an academic unit of the University, channels university usage by providing grants to students through a peer review process.

The Minnesota Supercomputer Center received its first machine, a Cray 1A, in September, 1981. In mid-1985, it installed a Cyber 205; and in the latter part of that year, two Cray 2 computers were installed within 3 months of each other.

4The number cannot be estimated exactly. First, it depends on the definition of supercomputer one uses. Secondly, the number keeps changing as States announce new plans for centers and as large research universities purchase their own HPCs.


Table 2-1—Federal Unclassified Supercomputer Installations

Laboratory                                         Number of machines

Department of Energy
  Los Alamos National Lab ........................ 6
  Livermore National Lab, NMFECC ................. 4
  Livermore National Lab ......................... 7
  Sandia National Lab, Livermore ................. 3
  Sandia National Lab, Albuquerque ............... 2
  Oak Ridge National Lab ......................... 1
  Idaho Falls National Engineering ............... 1
  Argonne National Lab ........................... 1
  Knolls Atomic Power Lab ........................ 1
  Bettis Atomic Power Lab ........................ 1
  Savannah/DOE ................................... 1
  Richland/DOE ................................... 1
  Schenectady Naval Reactors/DOE ................. 1
  Pittsburgh Naval Reactors/DOE .................. 2

Department of Defense
  Naval Research Lab ............................. 1
  Naval Ship R&D Center .......................... 1
  Fleet Numerical Oceanography ................... 1
  Naval Underwater System Command ................ 1
  Naval Weapons Center ........................... 1
  Martin Marietta/NTB ............................ 1
  Air Force Weapons Lab .......................... 2
  Air Force Global Weather ....................... 1
  Arnold Engineering and Development ............. 1
  Wright Patterson AFB ........................... 1
  Aerospace Corp. ................................ 1
  Army Ballistic Research Lab .................... 2
  Army/Tacom ..................................... 1
  Army/Huntsville ................................ 1
  Army/Kwajalein ................................. 1
  Army/WES (on order) ............................ 1
  Army/Warren .................................... 1
  Defense Nuclear Agency ......................... 1

NASA
  Ames ........................................... 5
  Goddard ........................................ 2
  Lewis .......................................... 1
  Langley ........................................ 1
  Marshall ....................................... 1

Department of Commerce
  National Inst. of Standards and Technology ..... 1
  National Oceanic & Atmospheric Administration .. 4

Environmental Protection Agency
  Raleigh, North Carolina ........................ 1

Department of Health and Human Services
  National Institutes of Health .................. 1
  National Cancer Institute ...................... 1

SOURCE: Office of Technology Assessment estimate.

The center bought its third Cray 2, the only one in use now, at the end of 1988, just after it installed its ETA-10. The ETA-10 has recently been decommissioned due to the closure of ETA. A Cray X-MP has been added, giving the center a total of two supercomputers. The Minnesota Supercomputer Center has acquired more supercomputers than any organization outside the Federal Government.

The Minnesota State Legislature provides funds to the University for the purchase of supercomputer time. Although the University buys a substantial portion of supercomputing time, the center has many industrial clients whose identities are proprietary but which include representatives of the auto, aerospace, petroleum, and electronics industries. They are charged a fee for the use of the facility.

The Ohio Supercomputer Center

The Ohio Supercomputer Center (OSC) originated from a coalition of scientists in the State. The center, located on Ohio State University's campus, is connected to 20 other Ohio universities via the Ohio Academic Research Network (OARNET). As of January 1989, three private firms were using the Center's resources.

In August 1987, OSC installed a Cray X-MP/24, which was upgraded to a Cray X-MP/28 a year later. The center replaced the X-MP in August 1989 with a Cray Research Y-MP. In addition to Cray hardware, there are 40 Sun graphics workstations, a Pixar II, a Stellar graphics machine, a Silicon Graphics workstation, and an Abekas Still Store machine. The Center maintains a staff of about 35 people.

The Ohio General Assembly began funding the center in the summer of 1987, appropriating $7.5 million. In March of 1988, the Assembly allocated $22 million for the acquisition of a Cray Y-MP. Ohio State University has pledged $8.2 million to augment the center's budget. As of February 1989, the State had spent $37.7 million in funding.5 OSC's annual budget is around $6 million (not including the purchase/leasing of their Cray).

Center for High Performance Computing,Texas (CHPC)

The Center for High Performance Computing is located at The University of Texas at Austin. CHPC serves all 14 institutions in the University of Texas System: 8 academic institutions and 6 health-related organizations.

5 J. Wuc, "Ohio: Blazing Computer," Ohio, February 1989, p. 12.


The University of Texas installed a Cray X-MP/24 in March 1986, and a Cray 14se in November of 1988. The X-MP is used primarily for research. For the time being, the Cray 14se is being used as a vehicle for the conversion of users to the Unix system. About 40 people staff the center.

Original funding for the center and the Cray X-MP came from bonds and endowments from both The University of Texas System and The University of Texas at Austin. The annual budget of CHPC is about $3 million. About 95 percent of the center's operating budget comes from State funding and endowments. Five percent of the costs are recovered from selling CPU time.

Alabama Supercomputer Network

The George C. Wallace Supercomputer Center, located in Huntsville, Alabama, serves the needs of researchers throughout Alabama. Through the Alabama Supercomputer Network, 13 Alabama institutions, university and government sites, are connected to the center. Under contract to the State, Boeing Computer Services provides the support staff and technical skills to operate the center. Support staff are located at each of the nodes to help facilitate the use of the supercomputer from remote sites.

A Cray X-MP/24 arrived in 1987 and became operational in early 1988. In 1987 the State of Alabama agreed to finance the center. The State allocated $2.2 million for the center and $38 million to Boeing Services for the initial 5 years. The average yearly budget is $7 million. The center has a support staff of about 25.

Alabama universities are guaranteed 60 percent of the available time at no cost, while commercial researchers are charged a user fee. The State's stated impetus for creating a supercomputer center was the technical superiority a supercomputer would bring: it would draw high-tech industry to the State, enhance interaction between industry and the universities, and promote research and the associated educational programs within the universities.

Commercial Labs

A few corporations, such as Boeing Computer Services, have been selling high performance computer time for some time. Boeing operates a Cray X-MP/24. Other commercial sellers of high performance computing time include the Houston Area Research Center (HARC). HARC operates the only Japanese supercomputer in America, the NEC SX-2. The center offers remote services.

Computer Sciences Corp. (CSC), located in Falls Church, Virginia, has a 16-processor FLEX/32 from Flexible Computer Corp., a Convex 120 from Convex Computer Corp., and a DAP 210 from Active Memory Technology. Federal agencies comprise two-thirds of CSC's customers.6 Power Computing Co., located in Dallas, Texas, offers time on a Cray X-MP/24. Situated in Houston, Texas, Supercomputing Technology sells time on its Cray X-MP/28. Opticom Corp., of San Jose, California, offers time on a Cray X-MP/24, Cray 1-M, Convex C220, and C1 XP.

Federal Centers

In an informal poll of Federal agencies, OTA identified 70 unclassified installations that operate supercomputers, confirming the commonly expressed view that the Federal Government still represents a major part of the market for HPC in the United States (see figure 2-1). Many of these centers serve the research needs of government scientists and engineers and are, thus, part of the total research computing environment. Some are available to non-Federal scientists; others are closed.

CHANGING ENVIRONMENT

The scientific computing environment has changed in important ways during the few years that NSF's Advanced Scientific Computing (ASC) Programs have existed. Some of these changes are as follows:

The ASC programs, themselves, have not evolved as originally planned. The original NSF planning document for the ASC program proposed to establish 10 supercomputer centers over a 3-year period; only 5 were funded. Center managers have also expressed the strong opinion that NSF has not met many of its original commitments for

6 Norris Parker Smith, "More Than Just Buying Cycles," Supercomputer Review, April 1989.


Figure 2-1—Distribution of Federal Supercomputers

[Bar chart of the number of supercomputers operated by each agency: DOE, 33; DoD, 19; NASA, 10; Commerce, 5; HHS, 2; EPA, 1]

SOURCE: Office of Technology Assessment, 1989.

funding in successive years of the contracts, forcing the centers to change their operational priorities and search for support in other directions.

Technology has changed. There has been a burst of innovation in the HPC industry. At the top of the line, Cray Research developed two lines of machines, the Cray 2 and the Cray X-MP (and its successor, the Y-MP), that are much more powerful than the Cray 1, which had been considered the leading edge of supercomputing for several years through the mid-1980s. IBM has delivered several 3090s equipped with multiple vector processors and has also become a partner in a project to develop a new supercomputer in a joint venture with SSI, a firm started by Steve Chen, a noted supercomputer architect previously with Cray Research.

More recently, major changes have occurred in the industry. Control Data has closed down ETA, its supercomputer operation. Cray Research has been broken into two parts: Cray Computer Corp. and Cray Research. Each will develop and market a different line of supercomputers. Cray Research will, initially at least, concentrate on the Y-MP models, the upcoming C-90 machines, and their longer term successors. Cray Computer Corp., under the leadership of Seymour Cray, will concentrate on development of the Cray 3, a machine based on gallium arsenide electronics.

At the middle and lower end, the HPC industry has introduced several new so-called "minisupercomputers," many of them based on radically different system concepts, such as massive parallelism, and many designed for specific applications, such as high-speed graphics. New chips promise very high-speed desktop workstations in the near future.

Finally, three Japanese manufacturers, NEC, Fujitsu, and Hitachi, have been successfully building and marketing supercomputers that are reportedly competitive in performance with U.S. machines.7 While these machines have, as yet, not penetrated the U.S. computer market, they indicate the potential competitiveness of the Japanese computer industry in the international HPC markets, and raise questions for U.S. policy.

Many universities and State systems have established "supercomputer centers" to serve the needs of their researchers.8 Many of these centers have only recently been formed, and some have not yet installed their systems, so their operational experience is, at best, limited to date. Furthermore, some other centers operate systems that, while very powerful scientific machines, are not considered by all experts to be supercomputers. Nevertheless, these centers provide high performance scientific computing to the research community, and create new demands for Federal support for computer time.

Individual scientists and research teams are also getting Federal and private support from their sponsors to buy their own "minisupercomputers." In some cases, these systems are used to develop and check out software eventually destined to run on larger machines; in other cases, researchers seem to find these machines adequate for their needs. In either mode of use, these departmental or laboratory systems expand the range of possible sources researchers turn to for high performance computing. Soon, desktop workstations will have performance equivalent to that of supercomputers of a decade ago, at a significantly lower cost.

7 Since, as noted above, comparing the power and performance of supercomputers is a complex and arcane field, OTA will refrain from comparing or ranking systems in any absolute sense.

8 See National Association of State Universities and Land-Grant Colleges, Supercomputing for the 1990's: A Shared Responsibility (Washington, DC: January 1989).


Finally, some important changes have occurred in national objectives or perceptions of issues. For example, the development of a very high capacity national science network (or "internet") has taken on a much greater significance. Originally conceived of in the narrow context of tying together supercomputer centers and providing regional access to them, the science network has now come to be thought of by its proponents as a basic infrastructure, potentially extending throughout (and, perhaps, even beyond) the entire scientific, technical, and educational community.

Science policy is also changing, as important and costly new projects have been started or are being seriously considered. Projects such as the supercollider, the space station, NASA's Earth Observing System (EOS) program, and human genome mapping may seem at first glance to compete for funding with science networks and supercomputers. However, they will create formidable new demands for computation, data communications, and data storage facilities; and, hence, constitute additional arguments for investments in an information technology infrastructure.

Finally, some of the research areas in the so-called "Grand Challenges"9 have attained even greater social importance, such as fluid flow modeling, which will help the design of faster and more fuel-efficient planes and ships; climate modeling, to help understand long-term weather patterns; and the structural analysis of proteins, to help understand diseases and design vaccines and drugs to fight them.

REVIEW AND RENEWAL OF THE NSF CENTERS

Based on the recent review, NSF has concluded that the centers, by and large, have been successful and are operating smoothly. That is, their systems are being fully used, they have trained many new users, and they are producing good science. In light of that conclusion, NSF has tentatively agreed to renewal for the three Cray-based centers and the IBM-based Cornell Center. The John von Neumann Center in Princeton has been based on ETA-10 computers. Since ETA was closed down, NSF put the review of the JvNC on hold pending review of a revised plan that has now been submitted. A decision is expected soon.

Due to the environmental changes noted above, if the centers are to continue in their present status as special NSF-sponsored facilities, the National Supercomputer Centers will need to sharply define their roles in terms of: 1) the users they intend to serve, 2) the types of applications they serve, and 3) the appropriate balance between service, education, and research.

The NSF centers are only a few of a growing number of facilities that provide access to HPC resources. Assuming that NSF's basic objective is to assure researchers access to the most appropriate computing for their work, it will be under increasing pressure to justify dedicating funds to one limited group of facilities. Five years ago, few U.S. academic supercomputer centers existed. When scientific demand was less, managerial attention was focused on the immediate problem of getting equipment installed and of developing an experienced user community. Under those circumstances, some ambiguity of purpose may have been acceptable and understandable. However, in light of the proliferation of alternative technologies and centers, as well as growing demand by researchers, unless the purposes of the National Centers are more clearly delineated, the facilities are at risk of being asked to serve too many roles and, as a result, serving none well.

Some examples of possible choices are as follows:

1. Provide Access to HPC

● Provide access to the most powerful, leading-edge supercomputers available.
● Serve the HPC requirements for research projects of critical importance to the Federal Government, for example, the "Grand Challenge" topics.
● Serve the needs of all NSF-funded researchers for HPC.
● Serve the needs of the (academic, educational, and/or industrial) scientific community for HPC.

9 "Grand Challenge" research topics are questions of major scientific importance that require for progress substantially greater computing resources than are currently available. The term was first coined by Nobel Laureate physicist Kenneth Wilson.


2. Educate and Train

● Provide facilities and programs to teach scientists and students how to use high performance computing in their research.

3. Advance the State of HPC Use in Research

● Develop applications and system software.
● Serve as centers for research in computational science.
● Work with vendors as test sites for advanced HPC systems.

As the use of HPC expands into more fields and among more researchers, what are the policies for providing access to the necessary computing resources? The Federal Government needs to develop a comprehensive analysis of the requirements of scientific researchers for high performance computing, Federal policies of support for scientific computing, and the variety of Federal and State/private computing facilities available for research.

We expect that OTA's final report will contribute to this analysis from a congressional perspective. However, the executive branch, including both lead agencies and OSTP, also needs to participate actively in this policy and planning process.

THE INTERNATIONAL ENVIRONMENT

Since some of the policy debate over HPCs has involved comparison with foreign programs, this section concludes with a brief description of the status of HPC in some other nations.

Japan

The Ministry of International Trade and Industry (MITI), in October of 1981, announced the undertaking of two computing projects, one on artificial intelligence, the Fifth Generation Computer Project, and one on supercomputing, the National Superspeed Computer Project. The publicity surrounding MITI's announcement focused on fifth generation computers, but brought the more general subject of supercomputing to the public attention. (The term "Fifth Generation" refers to computers specially designed for artificial intelligence applications, especially those that involve logical inference or "reasoning.")

Although in the eyes of many scientists the Fifth Generation project has fallen short of its original goals, eight years later it has produced some accomplishments in hardware architecture and artificial intelligence software. MITI's second project, dealing with supercomputers, has been more successful. Since 1981, when no supercomputers were manufactured by the Japanese, three companies have designed and produced supercomputers.

The Japanese manufacturers followed the Americans into the supercomputer market, yet in the short time since their entrance, late 1983 for Hitachi and Fujitsu, they have rapidly gained ground in HPC hardware. One company, NEC, has recently announced a supercomputer with processor speeds up to eight times faster than the present fastest American machine.10 Outside of the United States, Japan is the single biggest market for and supplier of supercomputers, although American supercomputer companies account for less than one-fifth of all supercomputers sold in Japan.11

In the present generation of supercomputers, U.S. supercomputers have some advantages. One of the American manufacturers' major advantages is the availability of scientific applications software. The Japanese lag behind the Americans in software development, although resources are being devoted to software research by the Japanese manufacturers and government, and there is no reason to think they will not be successful.

Another area in which American firms have differed from the Japanese is in their use of multiprocessor architecture (although this picture is now changing). For several years, American supercomputer companies have been designing machines with multiple processors to obtain speed. The only Japanese supercomputer that utilizes multiprocessors is the NEC system, which will not be available until the fall of 1990.

10 The NEC machine is not scheduled for delivery until 1990, at which time faster Cray computers may well be on the market also. See also the comments above about computer speed.


American firms have been active in the Japanese market, with mixed success.

Since 1979 Cray has sold 16 machines in Japan. Of the 16 machines, 6 went to automobile manufacturers, 2 to NTT, 2 to Recruit, 1 to MITI, 1 to Toshiba, 1 to Aichi Institute of Technology, and 1 to Mitsubishi Electric. None have gone to public universities or to government agencies.

IBM offers its 3090 with attached vector facilities. IBM does not make public its customer list, but reports that it has sold around 70 vector processor computers to Japanese clients. Some owners, or soon-to-be owners, include Nissan, NTT, Mazda, Waseda University, Nippon Steel, and Mitsubishi Electric.

ETA sold two supercomputers in Japan. The first was to the Tokyo Institute of Technology (TIT). The sale was important because it was the first sale of a CDC/ETA supercomputer to the Japanese as well as the first purchase of an American supercomputer by a Japanese national university. This machine was delivered late (it arrived in May of 1988) and had many operating problems, partially due to its being the first installation of an eight-processor ETA 10-E. The second machine was purchased (not delivered) on February 9, 1989 by Meiji University. How CDC will deal with the ETA 10 at TIT in light of the closure of ETA is unknown at this time.

Hitachi, Fujitsu, and NEC, the three Japanese manufacturers of supercomputers, are among the largest computer/electronics companies in Japan, and they produce their own semiconductors. Their size allows them to absorb the high initial costs of designing a new supercomputer, as well as provide large discounts to customers. Japan's technological lead is in its very fast single-vector processors. Little is known, as yet, about what is happening with parallel processing in Japan, although NEC's recent product announcement for the SX-X states that the machine will have multiprocessors.

Hitachi's supercomputer architecture is loosely based on its IBM-compatible mainframe. Hitachi entered the market in November of 1983. Unlike its domestic rivals, Hitachi has not entered the international market. All 29 of its ordered/installed supercomputers are located in Japan.

NEC's current supercomputer architecture is not based on its mainframe computer, and it is not IBM compatible. NEC entered the supercomputer market later than Hitachi and Fujitsu. Three NEC supercomputers have been sold or installed in foreign markets: an SX-2 machine at the Houston Area Research Consortium in the United States, one at the Laboratory of Aerospace Research in the Netherlands, and an SX-1 recently sold in Singapore. NEC's domestic users include five universities.

On April 10, 1989, in a joint venture with Honeywell Inc., NEC announced a new line of supercomputers, the SX-X. The most powerful machine is reported to be up to eight times faster than the Cray X-MP machine. The SX-X reportedly will run Unix-based software and will have multiprocessors. This machine is due to be shipped in the fall of 1990.

Fujitsu's supercomputers, like Hitachi's, are based on their IBM-compatible mainframes. Their first machine was delivered in late 1983. Fujitsu had sold 80 supercomputers in Japan by mid-1989. An estimated 17 machines have been sold to foreign customers. An Amdahl VP-200 is used at the Western Geophysical Institute in London. In the United States, the Norwegian company GECO, located in Houston, has a VP-200 and two VP-100s. The most recent sale was to the Australian National University, a VP-100.

Europe

European countries that have (or have ordered) supercomputers include: West Germany, France, England, Denmark, Spain, Norway, the Netherlands, Italy, Finland, Switzerland, and Belgium. Europe is catching up quickly with America and Japan in understanding the importance of high performance computing for science and industry. The computer industry is helping to stimulate European interest. For example, IBM has pledged $40 million towards a supercomputer initiative in Europe over the 2-year period from 1987 to 1989. It is creating a large base of followers in the European academic community by participating in such programs as the European Academic Supercomputing Initiative (EASI) and the Numerically Intensive Computing Enterprise (NICE). Cray Research also has a solid base in


academic Europe, supplying over 14 supercomputers to European universities.

The United Kingdom began implementing a high performance computing plan in 1985. The Joint Working Party on Advanced Research Computing's report in June of 1985, "Future Facilities for Advanced Research Computing," recommended a national facility for advanced research computing. This center would have the most powerful supercomputer available; upgrade the United Kingdom's networking system, JANET, to ensure communications to remote users; and house a national organization for advanced research computing to promote collaboration with foreign countries and within industry, ensuring the effective use of these resources.12 Following this report, a Cray X-MP/48 was installed at the Atlas Computer Center in Rutherford. A Cray 1s was installed at the University of London. Between 1986 and 1989, some $11.5 million was spent on upgrading and enhancing JANET.13

Alvey was the United Kingdom's key information technology R&D program. The program promoted projects in information technology undertaken jointly by industry and academics. The United Kingdom began funding the Alvey program in 1983. During the first 5 years, 350 million pounds were allocated to the Alvey program. The program was eliminated at the end of 1988. Some research was picked up by other agencies, and many of the projects that were sponsored by Alvey are now submitting proposals to Esprit (see below).

The European Community began funding the European Strategic Programme for Research in Information Technology (Esprit) program in 1984, partly as a reaction to the poor performance of the European Economic Community in the information technology market and partly as a response to MITI's 1981 computer programs. The program, funded by the European Community (EC), intends to "provide the European IT industry with the key components of technology it needs to be competitive on the world markets within a decade."14 The EC has designed a program that forces collaboration between nations, develops recognizable standards in the information technology industry, and promotes pre-competitive R&D. The R&D focuses on five main areas: microelectronics, software development, office systems, computer integrated manufacturing, and advanced information.

Phase I of Esprit, the first 5 years, received $3.88 billion in funding.15 The funding was split 50-50 by the EC and its participants. This was considered the catch-up phase. Emphasis was placed on basic research, in the expectation that marketable goods would follow. Many of the companies that participated in Phase I were small experimental companies.

Phase II, which begins in late 1989, is called commercialization. Marketable goods will be the major emphasis of Phase II. This implies that the larger firms will be the main industrial participants, since they have the capital needed to put a product on the market. The amount of funds for Phase II will be determined by the world environment in information technology and the results of Phase I, but has been estimated at around $4.14 billion.16

Almost all of the high performance computer technologies emerging from Europe have been based on massively parallel architectures. Some of Europe's parallel machines incorporate the transputer. Transputer technology (basically a computer on a chip) is based on high density VLSI (very large-scale integration) chips. The T800, Inmos's transputer, has the same power as Intel's 80386/80387 chip, the difference being in size and price. The transputer is about one-third the size and price of Intel's chip.17

The transputer, created by the Inmos company, had its initial R&D funded by the British government. Eventually Thorn EMI bought Inmos and the rights to the transputer. Thorn EMI recently sold Inmos to a French-Italian joint venture company, SGS-Thomson, just as it was beginning to be profitable.

12 "Future Facilities for Advanced Research Computing," the report of a Joint Working Party on Advanced Research Computing, United Kingdom, July 1985.

13 Decision paper on "Supercomputers in Australia," Department of Industry, Technology and Commerce, April 1988, pp. 14-15.

14 "Esprit," Commission of the European Communities, p. 5.
15 "Esprit," Commission of the European Communities, p. 21.
16 Simon Perry, "European Esprit Effort Breaks Ground in Software Standards," Electronic Business, Aug. 15, 1988, pp. 90-91.
17 Graham K. Ellis, "Transputers Advance Parallel Processing," Research and Development, March 1989, p. 50.


Some of the more notable high performance computer products and R&D in Europe include:

● T.Node, formerly called Supernode P1085, is one of the more successful endeavors of the Esprit program. T.Node is a massively parallel machine that exploits the Inmos T800 transputer. A single node is composed of 16 transputers connected by two NEC VLSI chips and two additional transputers. The participants in the project are the University of Southampton, the Royal Signals and Radar Establishment, Thorn EMI (all British), and the French firm Telemat. The prototype of the French T.Node, Marie, a massively parallel MIMD (multiple instruction, multiple data) computer, was delivered in April of 1988. The product is now being marketed in America.
● Project 415 is also funded by Esprit. Its project leader is Philips, the Dutch electronics group. This project, which consists of six groups, focuses on symbolic computation, artificial intelligence (AI), rather than "number crunching" (mathematical operations by conventional supercomputers). Using parallel architecture, the project is developing operating systems and languages that they hope will be available in 5 years for the office environment.18

● The Flagship project, originally sponsored by the Alvey program, has created a prototype parallel machine using 15 processors. Its original participants were ICL, Imperial College, and the University of Manchester. Other Alvey projects worked with the Flagship project in designing operating systems and languages for the computer. By 1992 the project hopes to have a marketable product. Since cancellation of the Alvey program, Flagship has gained sponsorship from the Esprit program.
● The Supernum project of West Germany, with the help of the French Isis program, is currently creating machinery with massively parallel architecture. The parallelism, based on Intel's 80386 microprocessors, is one of Esprit's more controversial and ambitious projects. Originally the project was sponsored by the West German government in its supercomputing program. A computer prototype was recently shown at the industry fair in Hanover. It will be marketed in Germany by the end of the year for around $14 million.
● The Supercluster, produced and manufactured by Parsytec GmbH, a small private company, exemplifies Silicon Valley-style initiative occurring in West Germany. Parsytec has received some financial backing from the West German government for its venture. This start-up firm sells a massively parallel machine that rivals superminicomputers or low-end supercomputers. The Supercluster architecture exploits the 32-bit transputer from Inmos, the T800. Sixteen transputer-based processors in clusters of four are linked together. This architecture is less costly than conventional machines, costing between $230,000 and $320,000.19 Parsytec has just begun to market its product in America.

Other Nations

The Australian National University recently purchased a Fujitsu VP-100. A private service bureau in Australia, Leading Edge, possesses a Cray Research computer. At least two sites in India have supercomputers, one at the Indian Meteorological Centre and one at ISC University. Two Middle Eastern petroleum companies house supercomputers, and Korea and Singapore both have research institutes with supercomputers.

Over half a dozen Canadian universities have high performance computers from CDC, Cray Research, or IBM. Canada's private sector has also invested in supercomputers; around 10 firms possess high performance computers. The Alberta government, aside from purchasing a supercomputer and supporting associated services, has helped finance Myrias Computer Corp. Its wholly owned U.S. subsidiary, Myrias Research Corp., manufactures the SP-2, a minisupercomputer.

One newly industrialized country is reported to bedeveloping a minisupercomputer of its own. The

18 "European Transputer-Based Projects a Challenge to U.S. Supercomputing Supremacy," Supercomputing Review, November 1988, pp. 8-9.

19 John Gosch, "A New Transputer Design From West German Startup," Electronics, Mar. 3, 1988, pp. 71-72.


first Brazilian minisupercomputer, claimed to be capable of 150 mips, is planned to be available by the end of 1989. The prototype is a parallel machine with 64 processors, each with 32-bit capacity. The machine will sell for $2.5 million. The Funding Authority of Studies and Projects (FINEP) financed the project, with annual investment around $1 million.


Chapter 3

Networks

Information is the lifeblood of science; communication of that information is crucial to the advance of research and its applications. Data communication networks enable scientists to talk with each other, access unique experimental data, share results and publications, and run models on remote supercomputers, all with a speed, capacity, and ease that makes possible the posing of new questions and the prospect of new answers. Networks ease research collaboration by removing geographic barriers. They have become an invaluable research tool, opening up new channels of communication and increasing access to research equipment and facilities. Most important, networking is becoming the indispensable foundation for all other use of information technology in research.

Research networking is also pushing the frontiers of data communications and network technologies. Like electric power, highways, and the telephone, data communications is an infrastructure that will be crucial to all sectors of the economy. Businesses demand on-line transaction processing, and financial markets run on globally networked electronic trading. The evolution of telephony to digital technology allows merging of voice, data, and information services networking, although voice circuits still dominate the deployment of the technology. Promoting scientific research networking, which deals with data-intense outputs like satellite imaging and supercomputer modeling, should push networking technology that will find application far outside of science.

Policy action is needed if Congress wishes to see the evolution of a full-scale national research and education network. The existing "internet" of scientific networks is a fledgling. As this conglomeration of networks evolves from an R&D enterprise to an operational network, users will demand round-the-clock, high-quality service. Academics, policy makers, and researchers around the world agree on the pressing need to transform it into a permanent infrastructure. This will entail grappling with difficult issues of public and private roles in funding, management, pricing/cost recovery, access, security, and international coordination, as well as assuring adequate funding to carry out initiatives that are set by Congress.

Research networking faces two particular policy complications. First, since the network in its broadest form serves most disciplines, agencies, and many different groups of users, it has no obvious lead champion. As a common resource, its potential sponsors may each be pleased to use it but unlikely to give it the priority and funding required to bring it to its full potential. There is a need for clear central leadership, as well as coordination of governments, the private sector, and universities. A second complication is a mismatch between the concept of a national research network and the traditionally decentralized, subsidized, mixed public-private nature of higher education and science. The processes and priorities of mission agency-based Federal support may need some redesigning, as they are oriented towards supporting ongoing mission-oriented and basic research, and may work less well at fostering large-scale scientific facilities and infrastructure that cut across disciplines and agency missions.

In the near term, the most important step is getting a widely connected, operational network in place. But the "bare bones" networks are a small part of the picture. Information that flows over the network, and the scientific resources and data available through the network, are the important payoffs. Key long-term issues for the research community will be those that affect the sort of information available over the network, who has access to it, and how much it costs. The main issue areas for scientific data networking are outlined below:

● research: to develop the technology required to transmit and switch data at very high rates;

● private sector participation: role of the common carriers and telecommunication companies in developing and managing the network, and of private information firms in offering services;

● scope: who the network is designed to serve will drive its structure and management;

● access: balancing open use against security and information control, and determining who will be able to gain access to the network for what purpose;

● standards: the role of government, industry, users, and international organizations in setting and maintaining technical standards;

● management: public and private roles; degree of decentralization;

● funding: an operational network will require significant, stable, continuing investment; the financial responsibilities demarcated must reflect the interests of various players, from individual colleges through States and the Federal Government, in their stake in network operations and policies;

● economics: pricing and cost recovery for network use, central to the evolution and management of any infrastructure; economics will drive the use of the network;

● information services: who will decide what types of services are to be allowed over the network, who is allowed to offer them, and who will resolve information issues such as privacy, intellectual property, fair competition, and security; and

● long-term science policy issues: the networks' impacts on the process of science, and on access to and dissemination of valuable scientific and technical information.

THE NATIONAL RESEARCH AND EDUCATION NETWORK (NREN)

"A universal communications network connected to national and international networks enables electronic communication among scholars anywhere in the world, as well as access to worldwide information sources, special experimental instruments, and computing resources. The network has sufficient bandwidth for scholarly resources to appear to be attached to a world local area network."

EDUCOM, 1988.

". . . a national research network to provide a distributed computing capability that links the government, industry, and higher education communities."

OSTP, 1987.

"The goal of the National Research and Education Network is to enhance national competitiveness and productivity through a high-speed, high-quality network infrastructure which supports a broad set of applications and network services for the research and instructional community."

EDUCOM/NTTF, March 1989.

"The NREN will provide high-speed communication access to over 1300 institutions across the United States within five years. It will offer sufficient capacity, performance, and functionality so that the physical distance between institutions is no longer a barrier to effective collaboration. It will support access to high-performance computing facilities and services . . . and advanced information sharing and exchange, including national file systems and online libraries . . . the NREN will evolve toward fully supported commercial facilities that support a broad range of applications and services."

FRICC, Program Plan for the NREN, May 23, 1989.

This chapter of the background paper reviews the status of and issues surrounding data networking for science, in particular the proposed NREN. It describes current Federal activities and plans, and identifies issues to be examined in the full report, to be completed in summer 1990.

The existing array of scientific networks consists of a hierarchy of local, regional, and national networks, linked into a whole. In this paper, "NREN" will be used to describe the next generation of the national "backbone" that ties them together. The term "Internet" is used to describe a more specific set of interconnected major networks, all of which use the same data transmission protocols. The most important are NSFNET and its major regional subnetworks, ARPANET, and several other federally initiated networks such as ESNET and NASNET. The term internet is used fairly loosely; at its broadest, the more generic term internet can be used to describe the international conglomeration of networks, with a variety of protocols and capabilities, which have a gateway into the Internet, which could include such things as BITNET and MCI Mail.

The Origins of Research Networking

Research users were among the first to link computers into networks, to share information and broaden remote access to computing resources. DARPA created ARPANET in the 1960s for two purposes: to advance networking and data communications R&D, and to develop a robust communications network that would support the data-rich conversations of computer scientists. Building on the resulting packet-switched network technology, other agencies developed specialized networks for their research communities (e.g., ESNET, CSNET, NSFNET). Telecommunications and electronic industries provided technology and capacity for these networks, but they were not policy leaders or innovators of new systems. Meanwhile, other research-oriented networks, such as BITNET and Usenet, were developed in parallel by academic and industry users who, not being grantees or contractors of Federal agencies, were not served by the agency-sponsored networks. These university and lab-based networks serve a relatively small number of specialized scientific users, a market that has been ignored by the traditional telecommunications industry. The networks sprang from the efforts of users (academic and other research scientists) and the Federal managers who were supporting them.1

The Growing Demand for Capability and Connectivity

Today there are thousands of computer networks in the United States. These networks range from temporary linkages between modem-equipped2 desktop computers linked via common carriers, to institution-wide area networks, to regional and national networks. Network traffic moves through different media, including copper wire and optical cables, signal processors and switches, satellites, and the vast common carrier system developed for voice communication. Much of this hodgepodge of networks has been linked (at least in terms of ability to interconnect) into the internet. The ability of any two systems to interconnect depends on their ability to recognize and deal with the form information flows take in each. These "protocols" are sets of technical standards that, in a sense, are the "languages" of communication systems. Networks with different protocols can often be linked together by computer-based "gateways" that translate the protocols between the networks.
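The gateway idea can be sketched in miniature. The two message formats and all function names below are invented purely for illustration (they do not correspond to any real 1989 gateway or protocol); the point is simply that a gateway parses a message in one network's "language" and re-emits it in another's:

```python
# Toy sketch of protocol translation between two hypothetical networks.
# One network writes mail headers as "KEY: value" lines; the other
# expects a "bang path" route line. A gateway converts between them.

def parse_colon_format(message):
    """Parse a message whose headers look like 'TO: user' lines."""
    headers, body = message.split("\n\n", 1)
    fields = {}
    for line in headers.splitlines():
        key, _, value = line.partition(": ")
        fields[key.lower()] = value
    return fields, body

def to_bang_path(fields, body):
    """Emit the same message in a hypothetical 'bang path' format."""
    route = "!".join([fields["via"], fields["to"]])
    return f"{route}\n{body}"

def gateway(message):
    """Translate a colon-format message into bang-path format."""
    fields, body = parse_colon_format(message)
    return to_bang_path(fields, body)
```

For example, a message addressed "TO: alice" via "relayhost" would leave the gateway as "relayhost!alice" followed by the body; real gateways of the period did the same kind of translation for incompatible mail and file-transfer protocols, with far more bookkeeping.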

National networks have partially coalesced, where technology allows cost savings without losing connectivity. Over the past years, several agencies have pooled funds and plans to support a shared national backbone. The primary driver for this interconnecting and coalescing of networks has been the need for connectivity among users. The power of the whole is vastly greater than the sum of the pieces. Substantial costs are saved by extending connectivity while reducing duplication of network coverage. The real payoff is in connecting people, information, and resources. Linking brings users in reach of each other. Just as telephones would be of little use if only a few people had them, a research and education network's connectivity is central to its usefulness, and this connectivity comes both from the ability of each network to reach the desks, labs, and homes of its users and the extent to which various networks are, themselves, interconnected.
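The claim that the whole exceeds the sum of the pieces can be made concrete with simple arithmetic: the number of possible user-to-user links grows roughly with the square of the number of connected users. The figures below are a back-of-the-envelope illustration, not numbers from this paper:

```python
def pairwise_links(users):
    """Distinct user-to-user links possible among `users` endpoints."""
    return users * (users - 1) // 2

# Two isolated 100-user networks offer 2 * 4950 = 9900 possible links.
# One combined 200-user internet offers 19900 possible links:
# more than double the reach, from exactly the same set of users.
```

This is why interconnection, rather than raw capacity alone, has driven the coalescing of the research networks.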

The Present NREN

The national research and education network can be viewed as four levels of increasingly complex and flexible capability:

● physical wire/fiber optic common carrier "highways";

● user-defined, packet-switched networks;

● basic network operations and services; and

● research, education, database, and information services accessible to network users.

In a fully developed NREN, all of these levels of service must be integrated. Each level involves different technologies, services, research opportunities, engineering requirements, clientele, providers, regulators, and policy issues. A more detailed look at the policy problems can be drawn by separating the NREN into its major components.

Level 1: Physical wire/fiber optic common carrier highways

The foundation of the network is the physical conduits that carry digital signals. These telephone wires, optical fibers, microwave links, and satellites are the physical highways and byways of data transit. They are invisible to the network user. To provide the physical skeleton for the internet, government, industry, and university network managers lease circuits from public switched common carriers, such as AT&T, MCI, GTE, and NTN. In doing so they take advantage of the large system of circuits already laid in place by the telecommunications common carriers for other telephony and data markets. A key issue at this level is to what extent broader Federal agency and national telecommunications policies will promote, discourage, or divert the evolution of a research-oriented data network.

1 John S. Quarterman and Josiah C. Hoskins, "Notable Computer Networks," Communications of the ACM, vol. 29, No. 10, October 1986, pp. 932-971; John S. Quarterman, The Matrix: Networks Around the World, Digital Press, August 1989.

2 A "modem" converts information in a computer to a form that a communication system can carry, and vice versa. It also automates some simple functions, such as dialing and answering the phone, and detecting and correcting transmission errors.

Level 2: User-defined subnetworks

The internet is a conglomeration of smaller foreign, regional, State, local, topical, private, government, and agency networks. Generally, these separately managed networks, such as SURANET, BARRNET, BITNET, and EARN, evolved along naturally occurring geographic, topical, or user lines, or mission agency needs. Most of these logical networks emerged from Federal research agency (including the Department of Defense) initiatives. In addition, there are more and more commercial, State and private, regional, and university networks (such as Accunet, Telenet, and Usenet), at the same time specialized and interlinked. Many have since linked through the Internet, while keeping to some extent their own technical and socioeconomic identity. This division into small, focused networks offers the advantage of keeping network management close to its users; but it demands standardization and some central coordination to realize the benefits of interconnection.

Networks at this level of operations are distinguished by independent management and technical boundaries. Networks often have different standards and protocols, hardware, and software. They carry information of different sensitivity and value. The diversity of these logical subnetworks matters to institutional subscribers (who must choose among network offerings), to regional and national network managers (who must manage and coordinate these networks into an internet), and to users (who can find the variety of alternatives confusing and difficult to deal with). A key issue is the management relationship among these diverse networks; to what extent is standardization and centralization desirable?

Level 3: Basic network operations and services

A small number of basic maintenance tools keeps the network running and accessible by diverse, distributed users. These basic services are software-based, provided for the users by network operators and computer manufacturers in operating systems. They include software for password recognition, electronic mail, and file transfer. These are core services necessary to the operation of any network. These basic services are not consistent across the current range of computers used by research. A key issue is to what extent these services should be standardized, and as important, who should make those decisions.
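As a minimal sketch of what one such basic service involves, the following illustrates a one-shot file transfer over present-day sockets. All names here are invented for illustration; the real services this chapter describes layer authentication, error recovery, and standardized protocols on top of this bare byte exchange:

```python
# Minimal sketch of a "file transfer" service: a one-shot server hands
# a payload of bytes to the first client that connects, then exits.
import socket
import threading

def start_server(payload):
    """Start a one-shot server; return the port it is listening on."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    srv.listen(1)

    def _serve():
        conn, _ = srv.accept()   # wait for one client
        conn.sendall(payload)    # send the whole "file"
        conn.close()
        srv.close()

    threading.Thread(target=_serve, daemon=True).start()
    return srv.getsockname()[1]

def fetch_file(port):
    """Connect to the server and read the payload until end-of-stream."""
    cli = socket.create_connection(("127.0.0.1", port))
    chunks = []
    while True:
        data = cli.recv(4096)
        if not data:             # empty read means the sender closed
            break
        chunks.append(data)
    cli.close()
    return b"".join(chunks)
```

Even this toy version shows why consistency matters: both ends must agree on how a transfer begins and ends before any data can move, which is exactly the standardization question raised above.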

Level 4: Value-added superstructure: links to research, education, and information services

The utility of the network lies in the information, services, and people that the user can access through the network. These value-added services provide specialized tools, information, and data for research and education. Today they include specialized computers and software, library catalogs and publication databases, archives of research data, conferencing systems, and electronic bulletin boards and publishing services that provide access to colleagues in the United States and abroad. These information resources are provided by volunteer scientists and by non-profit, for-profit, international, and government organizations. Some are amateur, poorly maintained bulletin boards; others are mature information organizations with well-developed services. Some are "free"; others recover costs through user charges.

Core policy issues are the appropriate roles for various information providers on the network. If the network is viewed as public infrastructure, what is "fair" use of this infrastructure? If the network eases access to sensitive scientific data (whether raw research data or government regulatory databases), how will this stress the policies that govern the relationships of industry, regulators, lobbyists, and experts? Should profit-seeking companies be allowed to market their services? How can we ensure that technologies needed for network maintenance, cost accounting, and monitoring will not be used inappropriately or intrusively? Who should set prices for various users and services? How will intellectual property rights be structured for electronically available information? Who is responsible for the quality and integrity of the data provided and used by researchers on the network?

Research Networking as a Strategic High Technology Infrastructure

Research networking has dual roles. First, networking is a strategic, high technology infrastructure for science. More broadly applied, data networking enables research, education, business, and manufacturing, and improves the Nation's knowledge competitiveness. Second, networking technologies and applications are themselves a substantial growth area, meriting focused R&D.

Knowledge is the commerce of education and research. Today networks are the highways for information and ideas. They expand access to computing, data, instruments, the research community, and the knowledge they create. Data are expensive (relative to computing hardware) and are increasingly created in many widely distributed locations, by specialized instruments and enterprises, and then shared among many separate users. The more effectively that research information is disseminated to other researchers and to industry, the more effective is scientific progress and social application of technological knowledge. An internet of networks has become a strategic infrastructure for research.

The research networks are also a testbed for data communications technology. Technologies developed through the research networks are likely to enhance productivity of all economic sectors, not just university research. The federally supported Internet has not only sponsored frontier-breaking network research, but has pulled data-networking technology with it. ARPANET catalyzed the development of packet-switching technology, which has expanded rapidly from R&D networking to multibillion-dollar data handling for business and financial transactions. The generic technologies developed for the Internet, hardware (such as high-speed switches) and software for network management, routing, and user interface, will transfer readily into general data-networking applications. Government support for applied research can catalyze and integrate R&D, decrease risk, create markets for network technologies and services, transcend economic and regulatory barriers, and accelerate early technology development and deployment. This would not only bolster U.S. science and education, but would fuel industry R&D and help support the market and competitiveness of the U.S. network and information services industry.

Governments and private industries the world over are developing research networks, to enhance R&D productivity and to create testbeds for highly advanced communications services and technologies. Federal involvement in infrastructure is motivated by the need for coordination and nationally oriented investment, to spread financial burdens, and to promote social policy goals (such as furthering basic research).3 Nations that develop markets in network-based technologies and services will create information industry-based productivity growth.

Federal Coordination of the Evolving Internet

NREN plans have evolved rapidly. Congressional interest has grown; in 1986, Congress requested the Office of Science and Technology Policy (OSTP) to report on options for networking for research and supercomputing.4 The resulting report, completed in 1987 by the interagency Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), called for a new Federal program to create an advanced national research network by the year 2000.5 This vision incorporated two objectives: 1) providing vital computer-communications network services for the Nation's academic research community, and 2) stimulating networking and communications R&D which would fuel U.S. industrial technology and commerce in the growing global data communications market.

The 1987 FCCSET report, building on ongoing Federal activities, addressed near-term questions over the national network scope, purposes, agency authority, performance targets, and budget. It did not resolve issues surrounding the long-term operation of a network, the role of commercial services in providing network operations and services, or interface with broader telecommunications policies.

3 Congressional Budget Office, New Directions for the Nation's Public Works, September 1988, p. xiii; CBO, Federal Policies for Infrastructure Management, June 1986.

4 P.L. 99-383, Aug. 21, 1986.

5 OSTP, A Research and Development Strategy for High Performance Computing, Nov. 20, 1987.

A 1988 National Research Council report praised ongoing activities, emphasized the need for coordination, stable funding, broadened goals and design criteria, integrated management, and increased private sector involvement.6

FCCSET's Subcommittee on Networking has since issued a plan to upgrade and expand the network.7 In developing this plan, agencies have worked together to improve and interconnect several existing networks. Most regional networks were joint creations of NSF and regional consortia, and have been part of the NSFNET world since their inception. Other quasi-private, State, and regional networks (such as CICNET, Inc., and CERFNET) have been started.

Recently, legislation has been reintroduced to authorize and coordinate a national research network.8 As now proposed, a National Research and Education Network would link universities, national laboratories, non-profit institutions and government research organizations, private companies doing government-supported research and education, and facilities such as supercomputers, experimental instruments, databases, and research libraries. Network research, as a joint endeavor with industry, would create and transfer technology for eventual commercial exploitation, and serve the data-networking needs of research and higher education into the next century.

Players in the NREN

The current Internet has been created by Federal leadership and funding, pulling together a wide base of university commitment, national lab and academic expertise, and industry interest and technology. The NREN involves many public and private actors. Their roles must be better delineated for effective policy. Each of these actors has vested interests and spheres of capabilities. Key players are:

● universities, which house most end users;

● the networking industry: the telecommunications, data communications, computer, and information service companies that provide networking technologies and services;

● State enterprises devoted to economic development, research, and education;

● industrial R&D labs (network users); and

● the Federal Government, primarily the national labs and research-funding agencies.

Federal funding and policy have stimulated the development of the Internet. Federal initiatives have been well complemented by States (through funding State networking and State universities' institutional and regional networking), universities (by funding campus networking), and industry (by contributing networking technology and physical circuits at sharply reduced rates). End users have experienced a highly subsidized service during this "experimental" stage. As the network moves to a bigger, more expensive, more established operation, how might these relative roles change?

Universities

Academic institutions house teachers, researchers, and students in all fields. Over the past few decades universities have invested heavily in libraries, local computing, campus networks, and regional network consortia. The money invested in campus networking far outweighs the investment in the NSFNET backbone. In general, academics view the NREN as fulfillment of a longstanding ambition to build a national system for the transport of information for research and education. EDUCOM has long labored from the "bottom" up, bringing together researchers and educators who used networks (or believed they could use them) for both research and teaching.

Networking Industry

There is no simple unified view of the NREN in the fragmented telecommunications "industry." The long-distance telecommunications common carriers generally see the academic market as too specialized and risky to offer much of a profit opportunity.

6 National Research Council, Toward a National Research Network (Washington, DC: National Academy Press, 1988), especially pp. 25-37.

7 FCCSET (Federal Coordinating Council for Science, Engineering, and Technology), The Federal High Performance Computing Program, Washington, DC: OSTP, Sept. 8, 1989.

8 S. 1067, "The National High-Performance Computer Technology Act of 1989," May 1989, introduced by Mr. Gore; hearings were held on June 21, 1989. H.R. 3131, "The National High-Performance Computer Technology Act of 1989," introduced by Mr. Walgren.


However, companies have gained early experience with new technologies and applications by participating in university R&D; it is for this reason that industry has jointly funded the creation and development of NSFNET.

Various specialized value-added common carriers offer packet-switched services. They could in principle provide some of the same services that the NREN would provide, such as electronic mail. They are not, however, designed to meet the capacity requirements of researchers, such as transferring vast files of supercomputer-generated visualizations of weather systems, simulated airplane test flights, or econometric models. Nor can common carriers provide the "reach" to all carriers.

States

The interests of States in research, education, and economic development parallel Federal concerns. Some States have also invested in information infrastructure development. Many States have invested heavily in education and research networking, usually based in the State university system and encompassing, to varying degrees, private universities, State government, and industry. The State is a "natural" political boundary for network financing. In some States, such as Alabama, New York, North Carolina, and Texas, special initiatives have helped create statewide networks.

Industry Users

There are relatively few industry users of the internet; most are very large R&D-intensive companies such as IBM and DEC, or small high-technology companies. Many large companies have internal business and research networks which link their offices and laboratories within the United States and overseas; many also subscribe to commercial services such as MCI Mail. However, these proprietary and commercial networks do not provide the internet's connectivity to scientists or the high bandwidth and services so useful for research communications. Like universities and national labs, companies are a part of the Nation's R&D endeavor; and being part of the research community today includes being "on" the internet. Appropriate industry use of the NREN should encourage interaction of industry, university, and government researchers, and foster technology transfer. Industry internet users bring with them their own set of concerns, such as cost accounting, proper network use, and information security. Other non-R&D companies, such as business analysts, also are likely to seek direct network connectivity to universities, government laboratories, and R&D-intensive companies.

Federal

Three strong rationales (support of mission and basic science, coordinating a strategic national infrastructure, and promotion of data-networking technology and industrial productivity) drive a substantial, albeit changing, Federal involvement. Another more modest goal is to rationalize duplication of effort by integrating, extending, and modernizing existing research networks. That is in itself quite important in the present Federal budgetary environment. The international nature of the network also demands a coherent national voice in international telecommunications standardization. The Internet's integration with foreign networks also justifies Federal concern over the international flow of militarily or economically sensitive technical information. The same university-government-industry linkages on a domestic scale drive Federal interests in the flow of information.

Federal R&D agencies' interest in research networking is to enhance their external research support missions. (Research networking is a small, specialized part of agency telecommunications. It is designed to meet the needs of the research community, rather than agency operations and administrative telecommunications that are addressed in FTS 2000.) The hardware and software communications technologies involved should be of broad commercial importance. The NREN plans reflect national interest in bolstering a serious R&D base and a competitive industry in advanced computer communications.

The dominance of the Federal Government in network development means that Federal agency interests have strongly influenced its form and shape. Policies can reflect Federal biases; for instance, the limitation of access to the early ARPANET to ARPA contractors left out many academics, who consequently created their own grass-roots, lower-capability BITNET.


International actors are also important. As with the telephone system, the internet is inherently international. These links require coordination, for example for connectivity standards, higher level network management, and security. This requirement implies the need for Federal-level management and policy.

The NREN in the International Telecommunications Environment

The nature and economics of an NREN will depend on the international telecommunications context in which it develops. Research networks are a leading edge of digital network technologies, but are only a tiny part of the communications and information services markets.

The 1990s will be a predominantly digital world; historically different computing, telephony, and business communications technologies are evolving into new information-intensive systems. Digital technologies are promoting systems and market integration. Telecommunications in the 1990s will revolve around flexible, powerful, "intelligent" networks. However, regulatory change and uncertainty, market turbulence, international competition, the explosion in information services, and significant changes in foreign telecommunications policies all are making telecommunications services more turbulent. This will cloud the research network's long-term planning.

High-bandwidth, packet-switched networking is at present a young market in comparison to commercial telecommunications. Voice overwhelmingly dominates other services (e.g., fax, e-mail, on-line data retrieval). While flexible, hybrid voice-data services are being introduced in response to business demand for data services, the technology base is optimized for voice telephony.

Voice communications brings to the world of computer telecommunications complex regulatory and economic baggage. Divestiture of the AT&T regulated monopoly opened the telecommunications market to new entrants, who have slowly gained long-haul market share and offered new technologies and information services. In general, however, the post-divestiture telecommunications industry remains dominated by the descendants of the old AT&T, and most of the impetus for service innovations comes from the voice market. One reason is uncertainty about the legal limits, for providing information services, imposed on the newly divested companies. (In comparison, the computer industry has been unregulated. With the infancy of the technology, and open markets, computer R&D has been exceptionally productive.) A crucial concern for long-range NREN planning is that scientific and educational needs might be ignored amid the regulations, technology priorities, and economics of a telecommunications market geared toward the vast telephone customer base.

POLICY ISSUES

The goal is clear; but the environment is complex, and the details will be debated as the network evolves.

There is substantial agreement in the scientific and higher education community about the pressing national need for a broad-reaching, broad-bandwidth, state-of-the-art research network. The existing Internet provides vital communication, research, and information services, in addition to its concomitant role in pushing networking and data handling technology. Increasing demand on network capacity has quickly saturated each network upgrade. In addition, the fast-growing demand is overburdening the current informal administrative arrangements for running the Internet. Expanded capability and connectivity will require substantial budget increases. The current network is adequate for broad e-mail service and for more restricted file transfer, remote logon, and other sophisticated uses. Moving to gigabit bandwidth, with appropriate network services, will demand substantial technological innovation as well as investment.

There are areas of disagreement and even broader areas of uncertainty in planning the future national research network. There are several reasons for this: the immaturity of data network technology, services, and markets; the Internet’s nature as strategic infrastructure for diverse users and institutions; and the uncertainties and complexities of overriding telecommunications policy and economics.

First, the current Internet is, to an extent, an experiment in progress, similar to the early days of the telephone system. Technologies, uses, and potential markets for network services are still nascent. Patterns of use are still evolving, and a reliable network has reached barely half of the research community. Future uses of the network are difficult to identify; each upgrade over the past 15 years has brought increased value and use as improved network capacity and access have made new applications feasible.

The Internet is a conglomeration of networks that grew up ad hoc. Some, such as ARPANET, CSNET, and MFENET, were high-quality national networks supported by substantial Federal funding. Other smaller networks were built and maintained by the late-night labors of graduate students and computer center operators. One of these, BITNET, has become a far-reaching and widely used university network, through the coordination of EDUCOM and the support of IBM. The Internet has since become a more coherent whole, under Federal coordination led by NSF and DARPA and advised by the Internet Activities Board. Improvements in service and connectivity have been astounding. Yet the patchwork nature of the Internet still dominates; some campus and regional networks are high quality and well maintained; others are lower speed, less reliable, and reach only a few institutions in their region. Some small networks are gatewayed into the Internet; others are not. This patchwork nature limits the effectiveness of the Internet, and argues for better planning and stronger coordination.

Second, the network is a strategic infrastructure, with all the difficulties in capitalizing, planning, financing, and maintaining that seem to attend any infrastructure.9 Infrastructures tend to suffer from a “commons” problem, leading to continuing underinvestment and conflict over centralized policy. By its nature the Internet has many diverse users, with diverse interests in and demands on the network. The network’s value is in linking and balancing the needs of these many users, whether they want advanced supercomputer services or merely e-mail. Some users are network-sophisticated, while many users want simple, user-friendly communications. This diversity of users complicates network planning and management. The scope and offerings of the network must be at least sketched out before a management structure appropriate to the desired mission is established.

Third, the network is part of the telecommunications world, rampant with policy and economic confusion. The research community is small, with specialized data needs that are subsidiary to larger markets. It is not clear that science’s particular networking needs will be met.

Planning Amidst Uncertainty

Given these three large uncertainties, there is no straightforward or well-accepted model for the “best” way to design, manage, and upgrade the future national research network. Future network use will depend on cost recovery and charging practices, about which very little is understood. These uncertainties should be accommodated in the design of network management as well as the network itself.

One way to clarify NREN options might be to look at experiences with other infrastructures (e.g., waterways, telephones, highways) for lessons about how different financing and charging policies affect who develops and deploys technology, how fast technology develops, and who has access to the infrastructure. Additionally, some universities are beginning trials in charging for network services; these should provide experience in how various charging practices affect usage, technology deployment and upgrading, and the impacts of network use policies on research and education at the level of the institution.

Table 3-1 lists the major areas of agreement and disagreement in various “models” of the proper form of network evolution.

Network Scope and Access

Scope

Where should an NREN reach: beyond research-intensive government laboratories and universities to all institutions of higher education? High schools? Nonprofit and corporate labs? Many believe that eventually, perhaps in 20 years, de facto data networking will provide universal linkage, akin to a sophisticated phone system.

9. Congressional Budget Office, New Directions for the Nation’s Public Works, September 1988; National Council on Public Works Improvement, Fragile Foundations: A Report on America’s Public Works, Washington, DC, February 1988.


Table 3-1-Principal Policy Issues in Network Development

Major areas of agreement:

Scope and access
1. The national need for a broad state-of-the-art research network that links basic research, government, and higher education.

Policy and management structure
2. The need for a more formal mechanism for planning and operating the NREN, to supersede and better coordinate informal interagency cooperation and ad hoc university and State participation, and for international coordination.

Financing and cost recovery
3. The desirability of moving from the current “market-establishing” environment of Federal and State grants and subsidies, with services “free” to users, to more formal cost recovery, shifting more of the cost burden and financial incentives to end users.

Network use
4. The desirability of realizing the potential of a network; the need for standards and policies to link to information services, databases, and nonresearch networks.

Major areas of disagreement and uncertainty:

1a. The exact scope of the NREN; whether and how to control domestic and foreign access.
1b. Hierarchy of network capability. Cost and effort limit the reach of state-of-the-art networking; an “appropriate networking” scenario would have the most intensive users on a leading-edge network and less demanding users on a lower-cost network that suffices for their needs. Where should those lines be drawn, and who should draw them? How can the Federal Government ensure that the gap between leading edge and casual is not too large, and that access is appropriate and equitable?
2a. The form and function of an NREN policy and management authority; the extent of centralization, particularly the role of the Federal Government; the extent of participation of industry users, the networking industry, common carriers, and universities in policy and operations; mechanisms for standard setting.
3a. How the transition to commercial operations and charging can and should be made; more generally, Federal-private sector roles in network policy and pricing; how pricing practices will shape access, use, and demand.
4a. Who should be able to use the network for what purposes, and at what entry cost; the process of guiding the economic structure of services, subsidies, and pricing for multi-product services; intellectual property policies.

SOURCE: Office of Technology Assessment, 1989.

The appropriate breadth of the network is unlikely to be fully resolved until more user communities gain more experience with networking, and a better understanding is gained of the risks and benefits of various degrees of network coverage. A balance must be struck in network scope, which provides a small network optimized for special users (such as scientists doing full-time, computationally intensive research) and also a broader network serving more diverse users. The scope of the Internet, and the capabilities of the networks encompassed in it, will need to balance the needs of specialized users without diluting the value for top-end and low-end users. NREN plans, standards, and technology should take into account the possibility of later expansion and integration with other networks and other communities currently not linked up. After-the-fact technical patches are usually inefficient and expensive. This may require more government participation in standard-setting to make it feasible for currently separated communities, such as high schools and universities, to interconnect later on.

Industry-academic boundaries are of particular concern. Interconnection generally promotes research and innovation. Companies are dealing with the risk of proprietary information release by maintaining independent corporate networks and by restricting access to open networks. How can funding and pricing be structured to ensure that for-profit companies bear an appropriate burden of network costs?

Access

Is it desirable to restrict access to the Internet? Who should control access? Open access is desired by many, but there are privacy, security, and commercial arguments for restricting access. Restricting access is difficult, and is determined more by access controls (e.g., passwords and monitoring) on the computers that attach users to the network than by the network itself. Study is needed on whether and how access can be controlled by technical fixes within the network, by computer centers attached to the network, by informal codes of behavior, or by laws.

Another approach is not to limit access, but to minimize the vulnerability of the network, and of its information resources and users, to accidents or malice. In comparison, essentially anyone who has a modest amount of money can install a phone, use a public phone, or use a friend’s phone, and access the national phone system. However, criminal, fraudulent, and harassing uses of the phone system are illegal. Access is unrestricted, but use is governed.

Controlling International Linkages

Science, business, and industry are international; their networks are inherently international. It is difficult to block private telecommunications links with foreign entities, and public telecommunications is already international. However, there is a fundamental conflict between the desire to capture information for national or corporate economic gain and the inherent openness of a network. Scientists generally argue that open network access fosters scientifically valuable knowledge exchange, which in turn leads to commercially valuable innovation.

Hierarchy of Network Capability

Investment in expanded network access must be balanced continually with the upgrading of network performance. As the network is a significant competitive advantage in research and higher education, access to the “best” network possible is important. There are also technological considerations in linking networks of various performance levels and various architectures. There is already a consensus that there should be a separate testbed or research network for developing and testing new network technologies and services, which will truly be at the cutting edge (and therefore also have the weaknesses of cutting-edge technology, particularly unreliability and difficulty of use).

Policy and Management Structure

Possible management models include federally chartered nonprofit corporations, single lead agencies, an interagency consortium, government-owned contractor operations, and commercial operations; precedents include the Tennessee Valley Authority, the Atomic Energy Commission, the NSF Antarctic Program, and Fannie Mae. What are the implications of various scenarios for the nature of traffic and users?

Degree of Centralization

What is the value of centralized, federally accountable management for network access control, traffic management and monitoring, and security, compared to the value of decentralized operations, open access and traffic? There are two key technical questions here: To what extent does network technology limit the amount of control that can be exerted over access and traffic content? To what extent does technology affect the strengths and weaknesses of centralized and decentralized management?

Mechanisms for Interagency Coordination

Interagency coordination has worked well so far, but with the scaling up of the network, more formal mechanisms are needed to deal with larger budgets and to more tightly coordinate further development.

Coordination With Other Networks

National-level resource allocation and planning must be coordinated with interdependent institutional and mid-level networking (the other two legs of networking).

Mechanisms for Standard Setting

Who should set standards, when should they be set, and how overarching should they be? Standards at some common-denominator level are absolutely necessary to make networks work. But excessive standardization may deter innovation in network technology, applications and services, and other standards.

Any one set of standards usually is optimal for some applications or users, but not for others. There are well-established international mechanisms for formal standards-setting, as well as strong international involvement in more informal standards development. These mechanisms have worked well, albeit slowly. Early standard-setting by agencies and their advisers accelerated the development of U.S. networks. In many cases the early established standards have become, with some modification, de facto national and even international standards. This is proving the case with ARPANET’s protocol suite, TCP/IP. However, many have complained that agencies’ relatively precipitous and closed standards determination has resulted in less-than-satisfactory standards. NREN policy should embrace standards-setting. Should it, however, encourage wider participation, especially by industry, than has been the case? U.S. policy must balance the need for international compatibility with the furthering of national interests.

Financing and Cost Recovery

How can the capital and operating costs of the NREN be met? Issues include subsidies, user or access charges, cost recovery policies, and cost accounting. As an infrastructure that spans disciplines and sectors, the NREN is outside the traditional grant mechanisms of science policy. How might NREN economics be structured to meet costs and achieve various policy goals, such as encouraging widespread yet efficient use, ensuring equity of access, pushing technological development while maintaining needed standards, protecting intellectual property and sensitive information while encouraging open communication, and attracting U.S. commercial involvement and third-party information services?

Creating a Market

One of the key issues centers around the extent to which deliberate creation of a market should be built into network policy, and into the surrounding science policy system. There are those who believe that it is important that the delivery of network access and services to academics eventually become a commercial operation, and that the current Federal subsidy and apparently “free” services will get academics so used to free services that there will never be a market. How do you gradually create an information market, for networks, or for network-accessible value-added services?

Funding and Charge Structures

Financing issues are akin to those in more traditional infrastructures, such as highways and waterways. These issues, which continue to dominate infrastructure debates, are Federal and private sector roles and the structure of Federal subsidies and incentives (usually to restructure payments and access to infrastructure services). Is there a continuing role for Federal subsidies? How can university accounting, OMB Circular A-21, and cost recovery practices be accommodated?

User fees for network access are currently charged as membership/access fees to institutions. End users generally are not charged. In the future, user fees may combine access/connectivity fees and use-related fees. They may be secured via a trust fund (as is the case with national highways, inland waterways, and airports), or be returned directly to operating authorities. A few regional networks (e.g., CICNET, Inc.) have set membership/connectivity fees to recover full costs. Many fear that user fees are not adequate for full funding/cost recovery.

Industry Participation

Industry has had a substantial financial role in network development. Industry participation has been motivated by a desire to stay abreast of data-networking technology as well as a desire to develop a niche in potential markets for research networking. It is thus desirable to have significant industry participation in the development of the NREN. Industry participation does several things: industry cost sharing makes the projects financially feasible; industry has the installed long-haul telecommunications base to build on; and industry involvement in R&D should foster technology transfer and, generally, the competitiveness of the U.S. telecommunications industry. Industry in-kind contributions to NSFNET, primarily from MCI and IBM, are estimated at $40 million to $50 million, compared to NSF’s 5-year, $14 million budget.10 It is anticipated that the value of industry cost sharing (e.g., donated switches, lines, or software) for NREN would be on the order of hundreds of millions of dollars.

10. “NSF Plans High-Speed Computer Network,” Science, p. 22.


Network Use

Network service offerings (e.g., databases and database searching services, news, publication, and software) will need some policy treatment. There need to be incentives to encourage development of and access to network services, yet not unduly subsidize such services or compete with private business, while maintaining quality control. Many network services used by scientists have been “free” to the end user.

Economic and legal policies will need to be clarified for reference services, the commercial information industry, Federal data banks, university data resources, libraries, publishers, and generally all potential services offered over the network.11 These policies should be designed to encourage use of services, while allowing developers to capture the potential benefits of network services and ensuring legal and economic incentives to develop and market network services.

Longer Term Science Policy Issues

The near-term technical implementation of the NREN is well laid out. However, longer-term policy issues will arise as the national network affects more deeply the conduct of science, such as:

- patterns of collaboration, communication and information transfer, education, and apprenticeship;
- intellectual property, the value and ownership of information;
- export control of scientific information;
- publishing of research results;
- the “productivity” of research and attempts to measure it;
- communication among scientists, particularly across disciplines and between university, government, and industry scientists;
- potential economic and national security risks of international scientific networking, collaboration, and scientific communication;
- equity of access to scientific resources, such as facilities, equipment, databases, research grants, conferences, and other scientists. (Will a fully implemented NREN change the concentration of academic science and Federal funding in a limited number of departments and research universities, and of corporate science in a few large, rich corporations? What might be the impacts of networks on traditional routes to scientific priority and prestige?)
- controlling scientific information flow. What technologies and authority to control network-resident scientific information? How might these controls affect misconduct, quality control, economic and corporate proprietary protection, national security, and preliminary release of tentative or confidential research information that is scientifically or medically sensitive?
- cost and capitalization of doing research; to what extent might networking reduce the need for facilities or equipment?
- oversight and regulation of science, such as quality control, investigations of misconduct, research monitoring, awarding and auditing of government grants and contracts, data collection, accountability, and regulation of research procedures.12 Might national networking enable or encourage new oversight roles for governments?
- the access of various publics to scientists and research information;
- the dissemination of scientific information, from raw data, research results, and drafts of papers through finished research reports and reviews; might some scientific journals be replaced by electronic reports?
- legal issues, data privacy, ownership of data, copyright. How might national networking interact with trends already underway in the scientific enterprise, such as changes in the nature of collaboration, sharing of data, and impacts of commercial potential on scientific research? Academic science traditionally has emphasized open and early communication, but some argue that pressures from competition for research grants and increasing potential for commercial value from basic research have dampened free communication. Might networks counter, or strengthen, this trend?

11. OMB Circular A-130, 50 Federal Register 52730 (Dec. 24, 1985); also H.R. 2381, the Information Policy Act of 1989, which restates the role of OMB and policies on government information dissemination.
12. U.S. Congress, Office of Technology Assessment, The Regulatory Environment for Science, OTA-TM-SET-34 (Washington, DC: U.S. Government Printing Office, February 1986).

Technical Questions

Several unresolved technical challenges are important to policy because they will help determine who has access to the network for what purposes. Such technical challenges include:

- standards for networks and network-accessible information services;
- requirements for interfaces to common carriers (local through international);
- requirements for interoperability across many different computers;
- improving user interfaces;
- reliability and bandwidth requirements;
- methods for measuring access and usage, to charge users, that will determine who is most likely to pay for network operating costs; and
- methods to promote security, which will affect the balance between network and information vulnerability, privacy, and open access.
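The metering-and-charging challenge can be made concrete with a small sketch. The scheme below is purely illustrative: the record fields, fee levels, and the idea of billing institutions by bytes carried are assumptions for this example, not part of any agency plan. It shows how per-session traffic records might be aggregated into institutional charges that combine a flat access fee with a use-related fee, the two components discussed later under “Funding and Charge Structures.”

```python
# Illustrative only: aggregates hypothetical per-session traffic records
# into institutional charges combining a flat access fee with a
# use-related fee. All field names and rates are invented assumptions.

from collections import defaultdict

ACCESS_FEE = 10_000.00    # hypothetical flat annual fee per institution ($)
RATE_PER_MEGABYTE = 0.05  # hypothetical use-related rate ($ per MB carried)

def bill(records):
    """records: iterable of (institution, bytes_carried) tuples."""
    usage = defaultdict(int)
    for institution, nbytes in records:
        usage[institution] += nbytes
    # Charge = flat access fee + use-related fee on total traffic.
    return {
        inst: ACCESS_FEE + RATE_PER_MEGABYTE * (total / 1e6)
        for inst, total in usage.items()
    }

records = [
    ("State University", 40_000_000_000),  # 40 GB of supercomputer output
    ("Small College", 200_000_000),        # 200 MB, mostly e-mail
]
for inst, charge in sorted(bill(records).items()):
    print(f"{inst}: ${charge:,.2f}")
```

Note that under this mostly flat structure, a 200-to-1 difference in traffic produces only a modest difference in charges; how heavily to weight the use-related component is exactly the kind of pricing question the text leaves open.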

Federal Agency Plans: FCCSET/FRICC

A recently released plan by the Federal Research Internet Coordinating Committee (FRICC) outlines a technical and management plan for NREN.13 This plan has been incorporated into the broader FCCSET implementation plan. The technical plan is well thought through and represents further refinement of the NREN concept. The key stages are:

Stage 1: upgrade and interconnect existing agency networks into a jointly funded and managed T1 (1.5 Mb/s) National Networking Testbed.14

Stage 2: integrate national networks into a T3 (45 Mb/s) backbone by 1993.

Stage 3: push a technological leap to a multigigabit NREN starting in the mid-1990s.
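The scale of the three stages can be put in perspective with back-of-envelope arithmetic. The sketch below is illustrative only: the 1-gigabyte dataset is an assumed example, and real throughput would be lower than the raw line rate after protocol overhead and congestion.

```python
# Ideal transfer time for a hypothetical 1-gigabyte dataset at each
# NREN stage's raw line rate; protocol overhead and congestion ignored.

DATASET_BITS = 1_000_000_000 * 8  # 1 GB expressed in bits

STAGES = [
    ("Stage 1 (T1, 1.5 Mb/s)", 1.5e6),
    ("Stage 2 (T3, 45 Mb/s)", 45e6),
    ("Stage 3 (gigabit, 1 Gb/s)", 1e9),
]

for name, bits_per_second in STAGES:
    seconds = DATASET_BITS / bits_per_second
    print(f"{name}: {seconds:8.1f} s (~{seconds / 60:.1f} min)")
```

At T1 rates the transfer takes roughly an hour and a half; at T3, about three minutes; at gigabit rates, eight seconds. That spread of well over an order of magnitude per stage is what makes interactive use of very large data streams plausible only in the later stages.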

The proposal identifies two parts of an NREN: an operational network and networking R&D. A service network would connect about 1,500 labs and universities by 1995, providing reliable service and rapid transfer of very large data streams, such as are found in interactive computer graphics, in apparent real time. The currently operating agency networks would be integrated under this proposal, to create a shared 45 Mb/s service net by 1992. The second part of the NREN would be R&D on a gigabit network, to be deployed in the latter 1990s. The first part is primarily an organizational and financial initiative, requiring little new technology. The second involves major new research activity in government and industry.

The “service” initiative extends present activities of Federal agencies, adding a governance structure which includes the non-Federal participants (regional and local networking institutions and industry) in a national networking council. It formalizes what are now ad hoc arrangements of the FRICC, and expands its scale and scope. Under this effort, virtually all of the Nation’s research and higher education communities will be interconnected. Traffic and traffic congestion will be managed via priority routing, with service for participating agencies guaranteed via “policy” routing techniques. The benefits will be in improving productivity for researchers and educators, and in creating and demonstrating the demand for networks and network services to the computing, telecommunications, and information industries.

The research initiative (called stage 3 in the FCCSET reports) is more ambitious, seeking support for new research on communications technologies capable of supporting a network that is at least a thousand times faster than the 45 Mb/s net. Such a net could use the currently unused capabilities of optical fibers to vastly increase effective capability and capacity, which are congested by today’s technology for switching and routing, and support the next generation of computers and communications applications. This effort would require a substantial Federal investment, but could invigorate the national communication technology base, and boost the long-term economic competitiveness of the telecommunications and computing industries. The gigabit network demonstration can be considered similar to the Apollo project for communications technologies, albeit on a smaller and less spectacular scale. Technical research needed would involve media, switches, network design and control software, operating systems in connected computers, and applications.

13. FRICC, Program Plan for the National Research and Education Network, May 23, 1989. FRICC has members from DHHS, DOE, DARPA, USGS, NASA, NSF, NOAA, and observers from the Internet Activities Board. FRICC is an informal committee that grew out of agencies’ shared interest in coordinating related network activities and avoiding duplication of resources. As the de facto interagency coordination forum, FRICC was asked by NSF to prepare the NREN program plan.
14. See NYSERNet NOTE, vol. 1, No. 1, Feb. 6, 1989. NYSERNet has been awarded a multimillion-dollar contract from DARPA to develop the National Networking Testbed.

There are several areas where the FRICC management plan, and other plans, are unclear. It calls for, but does not detail, any transition to commercial operations. It does not outline potential structures for long-term financing or cost recovery. And the national network council’s formal area of responsibility is limited to Federal agency operations. While this scope is appropriate for a Federal entity, and the private sector has participated influentially in past Federal FRICC plans, the proposed council does not encompass all the policy actors that need to participate in a coordinated national network. The growth of non-Federal networks demonstrates that some interests, such as smaller universities on the fringes of Federally supported R&D, have not been served. The FRICC/FCCSET implementation plan for networking research focuses on the more near-term management problems of coordinated planning and management of the NREN. It does not deal with two extremely important and complex interfaces. At the most fundamental level, the common carriers, the network is part of the larger telecommunications labyrinth with all its attendant regulations, vested interests, and powerful policy combatants. At the top level, the network is a gateway into a global information supermarket. This marketplace of information services is immensely complex as well as potentially immensely profitable, and policy and regulation have not kept up with the many new opportunities created by technology.

The importance of institutional and mid-level networking to the performance of a national network, and the continuing fragmentation and regulatory and economic uncertainty of lower-level networking, signal a need for significant policy attention to coordinating and advancing lower-level networking. While there is a formal advisory role for universities, industry, and other users in the FRICC plan, it is difficult to say how and how well their interests would be represented in practice. It is not clear what form this may take, or whether it will necessitate some formal policy authority, but there is a need to accommodate the interests of universities (or some set of universities), industry research labs, and the States in parallel to a Federal effort. The concerns of universities and the private sector about their role in the national network are reflected in EDUCOM’s proposal for an overarching Federal-private nonprofit corporation, and to a lesser extent in NRI’s vision. The FRICC plan does not exclude such a broader policy-setting body, but the current plan stops with Federal agency coordination.

Funding for the FRICC NREN, based on the analysis that went into the FCCSET report, is proposed at $400 million over 5 years, as shown below. This includes all Federal national-backbone spending on hardware, software, and research, which would be funneled through DARPA and NSF and overseen by an interagency council. It includes some continued support for mid-level or institutional networking, but not the value of any cost sharing by industry, or specialized network R&D by various agencies. This budget is generally regarded as reasonable and, if anything, modest considering the potential benefits (see table 3-2).15

NREN Management Desiderata

All proposed initiatives share the policy goal of increasing the nation's research productivity and creating new opportunities for scientific collaboration. As a technological catalyst, an explicit national NREN initiative would reduce unacceptably high levels of risk for industry and help create new markets for advanced computer-communications services and technologies. What is needed now is a sustained Federal commitment to consolidate and fortify agency plans, and to catalyze broader national involvement. The relationship between science-oriented data networking and the broader telecommunications world will need to be better sorted out before the NREN can be made into a partly or fully commercial operation. As the engineering challenge of building a fully national data network is surmounted, management and user issues of economics, access, and control of scientific information will rise in importance.

15 For example, National Research Council, Toward a National Research Network (Washington, DC: National Academy Press, 1988), pp. 2~-31.

Table 3-2—Proposed NREN Budget ($ millions)

                                               FY90   FY91   FY92   FY93   FY94
FCCSET Stage 1 & 2 (upgrade; NSF) . . . . .      14     23     55     50     50
FCCSET Stage 3 (gigabit+; DARPA) . . . . . .     16     27     40     55     60
  Total . . . . . . . . . . . . . . . . . .      30     50     95     95    110
S. 1067 authorization . . . . . . . . . . .      50     50    100    100    100
H.R. 3131 authorization . . . . . . . . . .      50     50    100    100    100

SOURCE: Office of Technology Assessment, 1989.

The NREN is a strategic, complex infrastructure which requires long-term planning. Consequently, network management should be stable (insulated from excessive politics and budget vagaries), yet allow for accountability, feedback, and course correction. It should be able to leverage funding, maximize cost efficiency, and create incentives for commercial networks. Currently, there is no single entity that is large enough, sufficiently protected from risk, and sufficiently free of regulation to make a proper national network happen. While there is a need to formalize current policy and management, there is concern that setting a strongly Federal structure in place might prevent a move to a more desirable, effective, and appropriate management system in the long run.

There is need for greater stability in NREN policy. The primary vehicle has been a voluntary coordinating group, the FRICC, consisting of program officers from research-oriented agencies, working within agency missions with loose policy guidance from the FCCSET. The remarkable cooperation and progress made so far depend on a complex set of agency priorities and budget fortunes, and continued progress must be considered uncertain.

The pace of resolution of these issues will be controlled initially by the Federal budget of each participating agency. While the bulk of the overall investment rests with mid-level and campus networks, the national network cannot be integrated without strong central coordination, given present national telecommunications policies and market conditions for the required network technology. The relatively modest investment proposed by the initiative can have a major impact by providing a forum for public-private cooperation in the creation of new knowledge, and a robust and willing experimental market to test new ideas and technologies.

For the short term there is a clear need to maintain the Federal initiative, to sustain the present momentum, to improve the technology, and to coordinate the expanding networks. The initiative should accelerate the aggregation of a sustainable domestic market for new information technologies and services. These goals are consistent with a primary purpose of improving the data communications infrastructure for U.S. science and engineering.