

National Networking Update

Basil Irwin, Senior Network Engineer

National Center for Atmospheric Research

SCD Network Engineering and Technology Section (NETS)

January 27, 1999


Summary of Topics

• Review status of Next Generation Internet (NGI) Initiative

• Review status of NSF’s vBNS Network

• Review status of UCAID’s Abilene Network (Internet2)

• Review the Gigapop concept

• Review Front Range GigaPop (FRGP) status


NGI Initiative Review


What Is the NGI Initiative?

• The NGI Initiative is a plan to implement the President's Oct 10, 1996 statement in Knoxville, TN of his "commitment to a new $100 million initiative, for the first year, to improve and expand the Internet . . .”

• The Next Generation Internet (NGI) IS NOT a network

• It’s a Presidential funding initiative

• The next step in Federal funding for seeding the evolving US networking infrastructure

• Goal was to provide $100 million in funding annually for 3 years

• Plan is at: www.ccic.gov/ngi/concept-Jul97


How Will the NGI Initiative Work?

• Federal research agencies with existing mission-oriented networks to take the lead

• Built on Federal/private partnerships:

– Between advanced technology researchers and advanced application researchers

– Between federally-funded network testbeds and commercial network service and equipment providers

• Requires substantial private-sector matching funds:

– Two-to-one ratio of private to Federal funds


FY1998 Funding

• $85 million:

– $42M DARPA

– $23M NSF

– $10M NASA

– $5M NIST

– $5M National Library of Medicine/National Institutes of Health

• DOE to be added in FY1999

• $109 million proposed for FY1999


Three NGI Initiative Goals

• Main NGI goal is to advance three networking areas:

– Goal 1: Advanced network technologies (e.g., protocols to transfer data that is being browsed)

– Goal 2: Advanced network infrastructure (e.g., wires and boxes that transmit the browsed data)

– Goal 3: Revolutionary applications (e.g., Web browsers)


Goal 1: Advanced Network Technologies

• Advanced technologies:

– Quality of service (QoS)

– Security and robustness

– Net management, including bandwidth allocation and sharing

– System engineering tools, metrics, statistics, and analysis

– New or modified protocols: routing, switching, multicast, security, etc.

– Collaborative and distributed application environment support

– Operating system improvements to support advanced services

• Achieved by funding university, Federal, and industry R&D to develop and deploy advanced services

• Done in open environment, utilizing IETF, ATM Forum, etc.


Goal 2: Network Infrastructure

• Subgoal 1: Develop demo net fabric that delivers 100 Mbps end-to-end to 100+ interconnected sites

– Accomplished by collaboration of Federal research institutions, telecommunications providers, and Internet providers

– Interconnect and expand vBNS (NSF), ESnet (DOE), NREN (NASA), DREN (DOD), and others (such as Internet2/Abilene)

– Funds universities, industry R&D, and Federal research institutions

– Subgoal 1 fabric generally expected to be highly reliable


Goal 2: Network Infrastructure

• Subgoal 2: Develop demo net fabric that delivers 1000 Mbps end-to-end to about 10 interconnected sites

– May be separate fabric with links to the Subgoal 1 fabric, and/or may include upgraded parts of the Subgoal 1 fabric

– Would involve very early technology implementations and wouldn't likely be as reliable as Subgoal 1 fabric

• Federal agencies would take the lead

• Commercialize advances ASAP

• Utilize IETF, ATM Forum, et al. to foster freely available commercial standards


Goal 3: Revolutionary Applications

• (Note: Real revolutionary applications are never found in a government-generated list)

• Some possible "revolutionary" applications:

– Health care: telemedicine

– Education: distance ed; digital libraries

– Scientific research: energy, earth systems, climate, biomed research

– National Security: high-performance global data comm

– Environment: monitoring, prediction, warning, response

– Government: better delivery of services

– Emergencies: disaster response, crisis management

– Design and Manufacture: manufacturing engineering

• "NGI will not provide funding support for applications per se"; will fund addition of networking to existing apps.


NGI Initiative Expectations

• Fund 100+ high-performance connections to research universities and Federal research institutions

• 100+ science applications will use the new connections

• 10+ improved Federal information services

• 30+ government-industry-academia R&D partnerships

• NGI program funding leveraged by two-to-one by these partnerships


NGI Initiative Proposed Management

• NGI Implementation Team

• Under LSN Working Group

• One member from each directly funded agency

• (Not clear to me what say-so this Team has over expenditures)


JET: Joint Engineering Team


JET

• Affinity group of:

– NSF (vBNS)

– NASA (NREN/NISN)

– DOD (DREN)

– DOE (ESnet)

– UCAID/Internet2 (Abilene)

• Group that is engineering the NGIXes


NGIXes: Next Generation Internet Exchanges


NGIXes

• NAPs for Federal lab networks to interconnect

• Layer 2

• ATM-based

• Minimum connection-speed is OC-3

• Replace the FIXes (really just FIX-W; FIX-E is already gone)

• Three of ‘em:

– West coast: NASA-Ames

– East coast: NASA-Goddard

– Mid-continent: Chicago (at MREN/STAR TAP)


Final Thoughts

• The NGI isn't a network: it's the improved network infrastructure that presumably results from the NGI Initiative

• The NSF’s vBNS does benefit from NGI funding.

• The Internet2/Abilene is an activity independent from the NGI Initiative, and does not really benefit from NGI funding


vBNS Review


vBNS: History


• vBNS goals:

– jumpstart use of high-performance networking for advanced research while advancing research itself with high-performance networking

– supplement the Commodity Internet, which has been inadequate for universities since NSFnet was decommissioned

• vBNS started about 3 years ago with the NSF supercomputing centers

• vBNS started adding universities about 2 years ago

• Currently 77 institutions connected to vBNS

– 21 more in progress

• 131 institutions approved for connection to vBNS

• NSF funding for vBNS ends March 2000


vBNS: The Network

• Operated by MCI

• ATM-based network, carrying mainly IP

• OC-12 (622-Mbps) backbone

• OC-3 (155-Mbps) & DS-3 (45-Mbps) to institutions

• 77 institutions currently connected

– 21 more in progress


STAR TAP


• Science, Technology And Research Transit Access Point

• NSF-designated NAP for attachment of international networks to the vBNS

• Co-located with MREN, the NSF/Ameritech NAP, and the mid-continent NGIX

– Is really just a single large ATM switch


vBNS and NCAR

• NCAR was an original vBNS node

• 40 of 63 UCAR member-universities are approved for vBNS (at last check on 8/1998)

• Major benefit for UCAR and its members

– greatly superior to the Commodity Internet

– example: more UNIDATA data possible

– example: terabyte data transfers possible


Abilene Review


Abilene: History

• First called the Internet2 Project

• Then non-profit UCAID (University Corporation for Advanced Internet Development) was founded

– UCAID is patterned after the UCAR model

– UCAID currently has 130 members (mostly universities)

• Abilene is the name of UCAID’s first network

• Note: Internet2 used to refer to:

– the Internet2 organization, which is now called UCAID

– the actual network, which is now named Abilene

– the concept for a future network, soon to be reality in the form of Abilene


Abilene: Goals

• Goals: jumpstart use of high-performance networking for advanced research while advancing research itself with high-performance networking (same as vBNS)

• But to be operated and managed by the members themselves, like the UCAR model

• Provide an alternative when NSF support of the vBNS terminates in March 2000


Abilene: The Basic Network

• Uses Qwest OC-48 (2.4-Gbps) fiber-optic backbone

– to grow to an OC-192 (9.6-Gbps) backbone

– Qwest to donate $0.5 billion worth of fiber leases over 5 years

• Hardware provided by Cisco Systems and Nortel (Northern Telecom)

• Internet Protocol (IP) over SONET

– no ATM layer

• Uses 10 core router nodes at Qwest POPs

– Denver is one of these


Abilene: Status


• Abilene soon to be designated an NSF-approved High-Performance Network (HPN)

– puts Abilene on an equal basis with vBNS

• Abilene reached peering agreement with vBNS so NSF HPC (High Performance Connection) schools have equal access to each other regardless of vBNS or Abilene connection

• UCAID expects Abilene to come online 2/1999

– 50 universities online in 2/1999

– 13 gigapops online in 2/1999

• Abilene beta network now includes a half-dozen universities

– plus exchanging routes with vBNS


Abilene and NCAR

• 48 of 63 UCAR member-universities are UCAID members (at last check on 8/1998)

• NSF funding of vBNS terminates March 2000

• Same benefit for UCAR and its members as vBNS

– greatly superior to the Commodity Internet

– example: more UNIDATA data possible

– example: terabyte data transfers possible


The GigaPop Concept


GigaPops: What Good Are They?

• Share costs through sharing infrastructure

• Aggregate to a central location and share high-speed access from there

• Share Commodity Internet expenses

• Essentially statistical multiplexing of expensive high-speed resources (see the sketch after this list)

– at any given time, much more bandwidth is available to each institution than each could afford without sharing

• Share engineering and management expertise

• More clout with vendors
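The statistical-multiplexing point is easy to sanity-check numerically. Below is a minimal sketch in Python, using made-up per-site burst figures (not FRGP measurements): because member sites burst at different times, the capacity needed on a shared link tracks an aggregate-demand percentile, which sits well below the sum of the individual peak rates.

```python
# Back-of-the-envelope simulation of statistical multiplexing:
# bursty sites sharing one fast link need far less total capacity
# than the sum of their individual peak rates. All numbers are
# illustrative assumptions, not FRGP measurements.
import random

random.seed(1)

N_SITES = 8        # an FRGP-sized membership (assumption)
PEAK_MBPS = 45     # per-site burst rate, DS-3-like (assumption)
BURST_PROB = 0.15  # fraction of time a site is bursting (assumption)
SAMPLES = 10_000   # number of time samples

aggregate = []
for _ in range(SAMPLES):
    # Each site is either nearly idle (1 Mbps) or bursting at its peak.
    demand = sum(PEAK_MBPS if random.random() < BURST_PROB else 1
                 for _ in range(N_SITES))
    aggregate.append(demand)

aggregate.sort()
p99 = aggregate[int(0.99 * SAMPLES)]
print(f"Sum of per-site peaks:            {N_SITES * PEAK_MBPS} Mbps")
print(f"99th-percentile aggregate demand: {p99} Mbps")
# The shared link can be provisioned near the 99th percentile, well
# under the 360-Mbps sum of peaks, yet each site still sees its full
# 45-Mbps burst rate almost all of the time.
```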


Front Range GigaPop (FRGP)


FRGP: Current NCAR GigaPop Services

• vBNS access

• Shared Commodity Internet access

• Intra-Gigapop access

• Web cache hosting

• 24 x 365 NOC (Network Operations Center)

• Engineering and management


FRGP+Abilene: What Should NCAR Do?

• Why should NCAR connect to Abilene?

– Abilene gives NCAR additional connectivity to most of its member institutions

– fate of vBNS is unknown after March 2000

– 48 of 63 UCAR members are also Internet2 members

• Why should NCAR join a joint FRGP/Abilene effort?

– combined FRGP/Abilene effort saves NCAR money


FRGP: Why NCAR as GP Operator?

• NCAR already has considerable gigapop operational experience

• NCAR is already serving the FRGP members

– Abilene connection is an incremental addition to the existing gigapop

– doesn’t require a completely new effort from scratch

• NCAR already has a 24 x 365 NOC

– at no extra charge

• NCAR has an existing networking staff to team with the new FRGP engineer

– at no extra cost

• NCAR is university-neutral


FRGP: Membership Types

• “Full” members

– both Commodity Internet + Abilene access

• Commodity-only members

– just Commodity Internet access


FRGP: Full Members

• University of Colorado - Boulder

• Colorado State University

• University of Colorado - Denver

• University Corporation for Atmospheric Research

• University of Wyoming


FRGP: Commodity-only Members

• Colorado School of Mines

• Denver University

• University of Northern Colorado


FRGP: Possible Future Members

• U of C System

• NOAA/Boulder

• NIST/Boulder

• NASA/Boulder


FRGP: But!!!

• This is far from a done deal at this time!

• Members still have funding issues

• No agreements have been finalized yet

• Etc.


FRGP: Why a Denver Gigapoint?

• Much cheaper for most members to backhaul to Denver instead of to existing NCAR gigapoint

– U of Wyoming, Colorado State, UofC Denver

• UofC Denver has computer room space that’s two blocks from Denver’s telco hotel.

• But also don’t want to re-engineer the NCAR gigapoint

– wanted to preserve vBNS backhaul to NCAR

– wanted to preserve MCI Commodity Internet backhaul to NCAR

– wanted to minimize changes to the existing gigapoint

• Incremental addition of Denver gigapoint is most cost-effective engineering option


FRGP: Routing Engineering

• Must deal with so-called policy-based routing (see the sketch after this list)

– that is, IP forwarding based on the packet’s source IP address

– example: some schools can use Abilene and some can’t

• Without high-speed source-IP-address lookup, this requires one forwarding table (i.e., one router) per policy

• FRGP has three identified policies at this time, for:

– Commodity-Internet-only institutions

– Commodity Internet + Abilene institutions

– Commodity Internet + Abilene + vBNS institutions

• Use ATM and PVCs to construct the router topology to implement these policies

• Note: distributed gigapoints require care to site routers optimally
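To make the policy idea concrete, here is a minimal software sketch of source-address policy routing. The prefixes and upstream names are hypothetical, chosen purely for illustration; the actual FRGP design implements the same logic in hardware, with one router per policy stitched together by ATM PVCs rather than a per-packet software lookup.

```python
# Sketch of source-address policy routing: the forwarding decision
# first keys on the packet's SOURCE prefix (which policy class the
# sending institution belongs to), then chooses an upstream for the
# destination. All prefixes and upstream names are hypothetical.
import ipaddress

# Policy classes: which upstream networks an institution may use.
POLICIES = {
    "commodity-only":         ["commodity"],
    "commodity+abilene":      ["commodity", "abilene"],
    "commodity+abilene+vbns": ["commodity", "abilene", "vbns"],
}

# Hypothetical source prefixes mapped to policy classes.
SOURCE_POLICY = {
    ipaddress.ip_network("192.0.2.0/24"):    "commodity-only",
    ipaddress.ip_network("198.51.100.0/24"): "commodity+abilene",
    ipaddress.ip_network("203.0.113.0/24"):  "commodity+abilene+vbns",
}

def classify_source(src: str) -> str:
    """Find the policy class for a packet's source address."""
    addr = ipaddress.ip_address(src)
    for net, policy in SOURCE_POLICY.items():
        if addr in net:
            return policy
    return "commodity-only"  # default to the least-privileged path

def forward(src: str, dst_net: str) -> str:
    """Use a research network only if the source's policy allows it;
    otherwise fall back to the Commodity Internet."""
    if dst_net in POLICIES[classify_source(src)]:
        return dst_net
    return "commodity"

print(forward("203.0.113.7", "vbns"))  # vBNS-eligible source -> vbns
print(forward("192.0.2.9", "vbns"))    # commodity-only source -> commodity
```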


FRGP: Abilene+Commodity Budget

• $851,000 total annual recurring costs (see the arithmetic check after this list)

– $133,000 per Full Member (5)

– $62,000 per Commodity-only Member (3)

• $150,000 one-time Abilene equipment costs

• This includes the following costs:

– existing FRGP costs

» existing Commodity Internet access costs

– new Abilene costs

• Does not include vBNS costs, campus-backhaul costs, or other local campus costs

• (Reduced per-member costs if more join)
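The quoted member shares do reproduce the stated total; a quick check:

```python
# Arithmetic check: per-member shares vs. the quoted $851,000 total.
full_members, full_share = 5, 133_000
commodity_members, commodity_share = 3, 62_000

total = full_members * full_share + commodity_members * commodity_share
print(f"${total:,}")  # -> $851,000, matching the slide
```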


FRGP: Annual Expenses

• Abilene + Commodity expenses, itemized below (see the consistency check after this list)

• Generic UCAID/Abilene costs

– $25,000 UCAID annual dues (per full FRGP member: 5)

– $20,000 per-institution Abilene fee (per full FRGP member: 5)

– $110,000 per-gigapop Abilene fee (shared by 5)

– $12,000 per-gigapop Qwest/Abilene port fee (shared by 5)

• Costs specific to FRGP

– $56,000 shared Boulder-Denver OC-3 link (shared by 5)

– $27,000 shared Denver-Qwest OC-3 link (shared by 5)

– $281,000 shared Commodity access fees (shared by 8)

– $140,000 other operational costs (shared by 8)

» engineer’s salary

» hardware maintenance

» travel
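The itemized expenses are internally consistent with the $851,000 total on the previous slide:

```python
# Consistency check: the itemized annual expenses should sum to the
# $851,000 total annual recurring cost quoted on the previous slide.
items = {
    "UCAID dues (5 x $25,000)":              5 * 25_000,
    "Abilene per-institution (5 x $20,000)": 5 * 20_000,
    "Abilene per-gigapop fee":               110_000,
    "Qwest/Abilene port fee":                12_000,
    "Boulder-Denver OC-3 link":              56_000,
    "Denver-Qwest OC-3 link":                27_000,
    "Commodity access fees":                 281_000,
    "Other operational costs":               140_000,
}
print(f"${sum(items.values()):,}")  # -> $851,000
# Note: an even split of the shared items would put a full member at
# roughly $138,600/yr rather than the quoted $133,000, so the member
# shares evidently reflect a negotiated, not strictly pro-rata, split.
```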


FRGP: Abilene Implications for NCAR

• New annual expenses of about $110,000 for NCAR

• Plus NCAR’s $50,000 share of startup costs

• NCAR employs & manages new FRGP engineer

• NCAR manages additional network equipment

– including new off-site equipment in Denver

• Increased engineering responsibilities for NCAR

• Increased administrative/accounting responsibilities for NCAR


Summary of URLs

• www.ngi.gov

– www.ccic.gov/ngi/concept-Jul97

• www.vbns.net

– www.vbns.net/presentations/workshop/vbns_tutorial/index.htm

• www.startap.net

• www.internet2.edu

• www.ccic.gov/jet

– pointer to NAPs, major Federal networks, etc.
